Sample records for "models provide information"

  1. The electronic-commerce-oriented virtual merchandise model

    NASA Astrophysics Data System (ADS)

    Fang, Xiaocui; Lu, Dongming

    2004-03-01

    Electronic commerce has become the dominant mode of commercial activity. With a virtual-reality interface, electronic commerce gains stronger expressive capacity and richer means of interaction. In most applications of virtual-reality technology to e-commerce, however, the 3D model describes only the appearance of the merchandise; it carries almost no commercial or interaction information, leaving the virtual model disconnected from the commerce data. We therefore present the Electronic Commerce oriented Virtual Merchandise Model (ECVMM), which combines a model with the commercial, interaction, and geometric information of the virtual merchandise. With this richer information, ECVMM better supports information acquisition and communication in electronic commerce.

  2. DEVELOPMENT OF A LAND-SURFACE MODEL PART I: APPLICATION IN A MESOSCALE METEOROLOGY MODEL

    EPA Science Inventory

    Parameterization of land-surface processes and consideration of surface inhomogeneities are very important to mesoscale meteorological modeling applications, especially those that provide information for air quality modeling. To provide crucial, reliable information on the diurn...

  3. The PDS4 Information Model and its Role in Agile Science Data Curation

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D.

    2017-12-01

    PDS4 is an information model-driven service architecture supporting the capture, management, distribution, and integration of massive planetary science data held in distributed data archives world-wide. The PDS4 Information Model (IM), the core element of the architecture, was developed using lessons learned from 20 years of archiving planetary science data and best practices for information model development. The foundational principles were adopted from the Open Archival Information System (OAIS) Reference Model (ISO 14721), the Metadata Registry Specification (ISO/IEC 11179), and W3C XML (Extensible Markup Language) specifications. These provided, respectively, an object-oriented model for archive information systems; a comprehensive schema for data dictionaries and hierarchical governance; and rules for encoding documents electronically. The PDS4 Information Model is unique in that it drives the PDS4 infrastructure by providing the representation of concepts and their relationships, constraints, rules, and operations; a sharable, stable, and organized set of information requirements; and machine-parsable definitions that are suitable for configuring and generating code. This presentation will provide an overview of the PDS4 Information Model and how it is being leveraged to develop and evolve the PDS4 infrastructure and enable agile curation of over 30 years of science data collected by the international planetary science community.

  4. Enabling interoperability in planetary sciences and heliophysics: The case for an information model

    NASA Astrophysics Data System (ADS)

    Hughes, J. Steven; Crichton, Daniel J.; Raugh, Anne C.; Cecconi, Baptiste; Guinness, Edward A.; Isbell, Christopher E.; Mafi, Joseph N.; Gordon, Mitchell K.; Hardman, Sean H.; Joyner, Ronald S.

    2018-01-01

    The Planetary Data System has developed the PDS4 Information Model to enable interoperability across diverse science disciplines. The Information Model is based on an integration of International Organization for Standardization (ISO) standards for trusted digital archives, information model development, and metadata registries. Whereas controlled vocabularies provide a basic level of interoperability by supplying a common set of terms for communication between machines and humans, the Information Model improves interoperability by means of an ontology that provides semantic information, or additional related context, for the terms. The Information Model was defined by a team of computer scientists and science experts from each of the diverse disciplines in the Planetary Science community, including Atmospheres, Geosciences, Cartography and Imaging Sciences, Navigational and Ancillary Information, Planetary Plasma Interactions, Ring-Moon Systems, and Small Bodies. The model was designed to be extensible beyond the Planetary Science community; for example, there are overlaps between certain PDS disciplines and the Heliophysics and Astrophysics disciplines. "Interoperability" can apply to many aspects of both the developer and the end-user experience, for example agency-to-agency, semantic-level, and application-level interoperability. We define these types of interoperability and focus on semantic-level interoperability, the type most directly enabled by an information model.

  5. Choosing a Model of Maternity Care: Decision Support Needs of Australian Women.

    PubMed

    Stevens, Gabrielle; Miller, Yvette D; Watson, Bernadette; Thompson, Rachel

    2016-06-01

    Access to information on the features and outcomes associated with the various models of maternity care available in Australia is vital for women's informed decision-making. This study sought to identify women's preferences for information access and decision-making involvement, as well as their priority information needs, for model of care decision-making. A convenience sample of adult women of childbearing age in Queensland, Australia were recruited to complete an online survey assessing their model of care decision support needs. Knowledge on models of care and socio-demographic characteristics were also assessed. Altogether, 641 women provided usable survey data. Of these women, 26.7 percent had heard of all available models of care before starting the survey. Most women wanted access to information on models of care (90.4%) and an active role in decision-making (99.0%). Nine priority information needs were identified: cost, access to choice of mode of birth and care provider, after hours provider contact, continuity of carer in labor/birth, mobility during labor, discussion of the pros/cons of medical procedures, rates of skin-to-skin contact after birth, and availability at a preferred birth location. This information encompassed the priority needs of women across age, birth history, and insurance status subgroups. This study demonstrates Australian women's unmet needs for information that supports them to effectively compare available options for model of maternity care. Findings provide clear direction on what information should be prioritized and ideal channels for information access to support quality decision-making in practice. © 2015 Wiley Periodicals, Inc.

  6. The Information a Test Provides on an Ability Parameter. Research Report. ETS RR-07-18

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2007-01-01

    In item-response theory, if a latent-structure model has an ability variable, then elementary information theory may be employed to provide a criterion for evaluation of the information the test provides concerning ability. This criterion may be considered even in cases in which the latent-structure model is not valid, although interpretation of…

  7. Virtual Model Validation of Complex Multiscale Systems: Applications to Nonlinear Elastostatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oden, John Tinsley; Prudencio, Ernest E.; Bauman, Paul T.

    We propose a virtual statistical validation process as an aid to the design of experiments for the validation of phenomenological models of the behavior of material bodies, with focus on those cases in which knowledge of the fabrication process used to manufacture the body can provide information on the micro-molecular-scale properties underlying macroscale behavior. One example is given by models of elastomeric solids fabricated using polymerization processes. We describe a framework for model validation that involves Bayesian updates of parameters in statistical calibration and validation phases. The process enables the quantification of uncertainty in quantities of interest (QoIs) and the determination of model consistency using tools of statistical information theory. We assert that microscale information drawn from molecular models of the fabrication of the body provides a valuable source of prior information on parameters as well as a means for estimating model bias and designing virtual validation experiments to provide information gain over calibration posteriors.

  8. TUNS/TCIS information model/process model

    NASA Technical Reports Server (NTRS)

    Wilson, James

    1992-01-01

    An Information Model comprises graphical and textual notation suitable for describing and defining the problem domain - in our case, TUNS or TCIS. The model focuses on the real world under study. It identifies what is in the problem and organizes the data into a formal structure for documentation and communication purposes. The Information Model is composed of an Entity Relationship Diagram (ERD) and a Data Dictionary component. The combination of these components provides an easy-to-understand methodology for expressing the entities in the problem space, the relationships between entities, and the characteristics (attributes) of the entities. This approach is the first step in information system development. The Information Model identifies the complete set of data elements processed by TUNS. This representation provides a conceptual view of TUNS from the perspective of entities, data, and relationships. The Information Model reflects the business practices and real-world entities that users must deal with.

  9. Modeling patients' acceptance of provider-delivered e-health.

    PubMed

    Wilson, E Vance; Lankton, Nancy K

    2004-01-01

    Health care providers are beginning to deliver a range of Internet-based services to patients; however, it is not clear which of these e-health services patients need or desire. The authors propose that patients' acceptance of provider-delivered e-health can be modeled in advance of application development by measuring the effects of several key antecedents to e-health use and applying models of acceptance developed in the information technology (IT) field. This study tested three theoretical models of IT acceptance among patients who had recently registered for access to provider-delivered e-health. An online questionnaire administered items measuring perceptual constructs from the IT acceptance models (intrinsic motivation, perceived ease of use, perceived usefulness/extrinsic motivation, and behavioral intention to use e-health) and five hypothesized antecedents (satisfaction with medical care, health care knowledge, Internet dependence, information-seeking preference, and health care need). Responses were collected and stored in a central database. All tested IT acceptance models performed well in predicting patients' behavioral intention to use e-health. Antecedent factors of satisfaction with provider, information-seeking preference, and Internet dependence uniquely predicted constructs in the models. Information technology acceptance models provide a means to understand which aspects of e-health are valued by patients and how this may affect future use. In addition, antecedents to the models can be used to predict e-health acceptance in advance of system development.

  10. Making a difference: incorporating theories of autonomy into models of informed consent.

    PubMed

    Delany, C

    2008-09-01

    Obtaining patients' informed consent is an ethical and legal obligation in healthcare practice. Whilst the law provides prescriptive rules and guidelines, ethical theories of autonomy provide moral foundations. Models of practice of consent have been developed in the bioethical literature to assist in understanding and integrating the ethical theory of autonomy and legal obligations into the clinical process of obtaining a patient's informed consent to treatment. To review four models of consent and analyse the way each model incorporates the ethical meaning of autonomy and how, as a consequence, they might change the actual communicative process of obtaining informed consent within clinical contexts. An iceberg framework of consent is used to conceptualise how ethical theories of autonomy are positioned beneath, and underpin, the above-surface, visible clinical communication, including associated legal guidelines and ethical rules. Each model of consent is critically reviewed from the perspective of how it might shape the process of informed consent. All four models would alter the process of obtaining consent. Two models provide structure and guidelines for the content and timing of obtaining patients' consent. The other two models rely on an attitudinal shift in clinicians; they provide ideas for consent by focusing on the underlying values, attitudes, and meaning associated with the ethical meaning of autonomy. The paper concludes that models of practice that explicitly incorporate the underlying ethical meaning of autonomy as their basis provide less prescriptive, but more theoretically rich, guidance for healthcare communicative practices.

  11. Mutual information and the fidelity of response of gene regulatory models

    NASA Astrophysics Data System (ADS)

    Tabbaa, Omar P.; Jayaprakash, C.

    2014-08-01

    We investigate cellular response to extracellular signals by using information theory techniques motivated by recent experiments. We present results for the steady state of the following gene regulatory models found in both prokaryotic and eukaryotic cells: a linear transcription-translation model and a positive or negative auto-regulatory model. We calculate both the information capacity and the mutual information exactly for simple models and approximately for the full model. We find that (1) small changes in mutual information can lead to potentially important changes in cellular response and (2) there are diminishing returns in the fidelity of response as the mutual information increases. We calculate the information capacity using Gillespie simulations of a model for the TNF-α-NF-κB network and find good agreement with the measured value for an experimental realization of this network. Our results provide a quantitative understanding of the differences in cellular response when comparing experimentally measured mutual information values of different gene regulatory models. Our calculations demonstrate that Gillespie simulations can be used to compute the mutual information of more complex gene regulatory models, providing a potentially useful tool in synthetic biology.
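
    The mutual-information quantity discussed above can be illustrated with a short, self-contained sketch (not the authors' code; the joint distributions below are made up for illustration). For a discrete input-output pair, I(X;Y) is computed directly from the joint probability table:

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0                        # zero cells contribute nothing
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px * py)[mask])))

# A noiseless binary channel preserves the full input bit...
noiseless = [[0.5, 0.0], [0.0, 0.5]]
# ...while statistically independent input and output share no information.
independent = [[0.25, 0.25], [0.25, 0.25]]
```

    A noiseless channel yields exactly 1 bit and an independent pair yields 0 bits; intermediate values quantify the fidelity of response in the sense used above.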

  12. Information-sharing to promote informed choice in prenatal screening in the spirit of the SOGC clinical practice guideline: a proposal for an alternative model.

    PubMed

    Vanstone, Meredith; Kinsella, Elizabeth Anne; Nisker, Jeff

    2012-03-01

    The 2011 SOGC clinical practice guideline "Prenatal Screening for Fetal Aneuploidy in Singleton Pregnancies" recommends that clinicians offer prenatal screening to all pregnant women and provide counselling in a non-directive manner. Non-directive counselling is intended to facilitate autonomous decision-making and remove the clinician's views regarding a particular course of action. However, recent research in genetic counselling raises concerns that non-directive counselling is neither possible nor desirable, and that it may not be the best way to facilitate informed choice. We propose an alternative model of information-sharing specific to prenatal screening that combines attributes of the models of informative decision-making and shared decision-making. Our proposed model is intended to provide clinicians with a strategy to communicate information about prenatal screening in a way that facilitates a shared deliberative process and autonomous decision-making. Our proposed model may better prepare a pregnant woman to make an informed choice about participating in prenatal screening on the basis of her consideration of the medical information provided by her clinician and her particular circumstances and values.

  13. Cultural Resource Predictive Modeling

    DTIC Science & Technology

    2017-10-01

    Survey excerpts: "...property to manage?" (a. Yes) "2) Do you use CRPM (Cultural Resource Predictive Modeling)?" (No, but I use predictive modelling informally. For example...) ...resource program and provide support to the test ranges for their missions. This document will provide information such as lessons learned, points of contact, and resources to the range cultural resource managers. Objective/Scope: identify existing cultural resource predictive models and...

  14. Ecological Modeling Guide for Ecosystem Restoration and Management

    DTIC Science & Technology

    2012-08-01

    …may result from proposed restoration and management actions. This report provides information to guide environmental planners in the selection, development, evaluation, and documentation of ecological models. A…

  15. Modelling End-User of Electronic-Government Service: The Role of Information quality, System Quality and Trust

    NASA Astrophysics Data System (ADS)

    Witarsyah Jacob, Deden; Fudzee, Mohd Farhan Md; Aizi Salamat, Mohamad; Kasim, Shahreen; Mahdin, Hairulnizam; Azhar Ramli, Azizul

    2017-08-01

    Many governments around the world increasingly use internet technologies, such as electronic government, to provide public services. These services range from basic informational websites to sophisticated tools for managing interactions between government agencies and beyond government. Electronic government (e-government) aims to provide services that are more accurate, easily accessible, cost-effective, and time-saving for the community. In this study, we develop a new model of e-government service adoption by extending the Unified Theory of Acceptance and Use of Technology (UTAUT) through the incorporation of additional variables: System Quality, Information Quality, and Trust. The model is then tested using a large-scale, multi-site survey of 237 Indonesian citizens and validated using Structural Equation Modeling (SEM). The results indicate that the System Quality, Information Quality, and Trust variables significantly affect user behavior. This study extends the current understanding of the influence of System Quality, Information Quality, and Trust factors for researchers, practitioners, and policy makers.

  16. General Nonlinear Ferroelectric Model v. Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Wen; Robbins, Josh

    2017-03-14

    The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.

  17. Item Information in the Rasch Model. Project Psychometric Aspects of Item Banking No. 34. Research Report 88-7.

    ERIC Educational Resources Information Center

    Engelen, Ron J. H.; And Others

    Fisher's information measure for the item difficulty parameter in the Rasch model and its marginal and conditional formulations are investigated. It is shown that expected item information in the unconditional model equals information in the marginal model, provided the assumption of sampling examinees from an ability distribution is made. For the…
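
    As background to the record above (the abstract itself is truncated), the Fisher information of a single Rasch item has a well-known closed form, I(theta) = P(theta)(1 - P(theta)), where P is the item response function. A minimal sketch:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model,
    for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of one Rasch item: I(theta) = P(1 - P)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)
```

    Information peaks at 0.25 when ability equals item difficulty and falls off on either side, which is why the information a test provides depends on how its item difficulties are spread over the ability range.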

  18. Capturing information needs of care providers to support knowledge sharing and distributed decision making.

    PubMed

    Rogers, M; Zach, L; An, Y; Dalrymple, P

    2012-01-01

    This paper reports on work carried out to elicit information needs at a trans-disciplinary, nurse-managed health care clinic that serves a medically disadvantaged urban population. The trans-disciplinary model provides a "one-stop shop" for patients who can receive a wide range of services beyond traditional primary care. However, this model of health care presents knowledge sharing challenges because little is known about how data collected from the non-traditional services can be integrated into the traditional electronic medical record (EMR) and shared with other care providers. There is also little known about how health information technology (HIT) can be used to support the workflow in such a practice. The objective of this case study was to identify the information needs of care providers in order to inform the design of HIT to support knowledge sharing and distributed decision making. A participatory design approach is presented as a successful technique to specify requirements for HIT applications that can support a trans-disciplinary model of care. Using this design approach, the researchers identified the information needs of care providers working at the clinic and suggested HIT improvements to integrate non-traditional information into the EMR. These modifications allow knowledge sharing among care providers and support better health decisions. We have identified information needs of care providers as they are relevant to the design of health information systems. As new technology is designed and integrated into various workflows it is clear that understanding information needs is crucial to acceptance of that technology.

  19. System Dynamics Modeling for Supply Chain Information Sharing

    NASA Astrophysics Data System (ADS)

    Feng, Yang

    In this paper, we use system dynamics to model supply chain information sharing. First, we set the model boundaries, build a system dynamics model of the supply chain before information sharing, analyze the model's simulation results under different parameter settings, and suggest improvements. We then build a system dynamics model of the supply chain with information sharing and compare the two models' simulation results, showing the importance of information sharing in supply chain management. We hope these simulations provide scientific support for enterprise decision-making.
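
    The before/after comparison the abstract describes can be illustrated with a toy simulation (not the paper's model; the order-up-to policy, lead times, and moving-average window are assumptions chosen for illustration). Each echelon amplifies the variance of the demand signal it sees, so a wholesaler that sees only the retailer's orders swings far more than one that shares the end-customer demand signal:

```python
import numpy as np

rng = np.random.default_rng(1)

def orders_from(demand, lead, window=4):
    """Orders sent upstream by one echelon under an order-up-to policy
    with a moving-average forecast: o_t = d_t + (L/p)(d_t - d_{t-p})."""
    d = np.asarray(demand, dtype=float)
    o = d.copy()
    o[window:] = d[window:] + (lead / window) * (d[window:] - d[:-window])
    return o

customer = rng.normal(100, 10, 2000)          # end-customer demand
retailer = orders_from(customer, lead=2)      # retailer sees true demand
no_sharing = orders_from(retailer, lead=3)    # wholesaler sees only orders
sharing = orders_from(customer, lead=3)       # wholesaler sees true demand
```

    Without sharing, the variance amplification compounds across echelons (the bullwhip effect); with sharing, the wholesaler's orders track demand directly and stay near single-echelon variance.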

  20. Influenza forecasting with Google Flu Trends.

    PubMed

    Dugas, Andrea Freyer; Jalalpour, Mehdi; Gel, Yulia; Levin, Scott; Torcaso, Fred; Igusa, Takeru; Rothman, Richard E

    2013-01-01

    We developed a practical influenza forecast model based on real-time, geographically focused, and easy-to-access data, designed to provide individual medical centers with advance warning of the expected number of influenza cases, thus allowing sufficient time to implement interventions. Secondly, we evaluated the effects of incorporating a real-time influenza surveillance system, Google Flu Trends, and meteorological and temporal information on forecast accuracy. Forecast models designed to predict one week in advance were developed from weekly counts of confirmed influenza cases over seven seasons (2004-2011) divided into seven training and out-of-sample verification sets. Forecasting procedures using classical Box-Jenkins, generalized linear model (GLM), and generalized linear autoregressive moving average (GARMA) methods were employed to develop the final model and assess the relative contribution of external variables such as Google Flu Trends, meteorological data, and temporal information. A GARMA(3,0) forecast model with a Negative Binomial distribution integrating Google Flu Trends information provided the most accurate influenza case predictions. The model, on average, predicts weekly influenza cases during 7 out-of-sample outbreaks to within 7 cases for 83% of estimates. Google Flu Trends data was the only source of external information to provide statistically significant forecast improvements over the base model in four of the seven out-of-sample verification sets. Overall, the p-value of adding this external information to the model is 0.0005. The other exogenous variables did not yield a statistically significant improvement in any of the verification sets. Integer-valued autoregression of influenza cases provides a strong base forecast model, which is enhanced by the addition of Google Flu Trends, confirming the predictive capabilities of search-query-based syndromic surveillance. This accessible and flexible forecast model can be used by individual medical centers to provide advance warning of future influenza cases.
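
    A full GARMA(3,0) model with a Negative Binomial distribution is beyond a short sketch, but the flavor of the approach (count regression on lagged cases plus an external index) can be shown on synthetic data. Everything below is illustrative: the data are simulated, the "search index" merely stands in for Google Flu Trends, and a plain Poisson log-linear autoregression replaces the GARMA machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated weekly case counts driven by last week's count and an
# external index standing in for a search-query signal.
n = 160
index = rng.gamma(2.0, 1.0, size=n)
counts = np.empty(n)
counts[0] = 5
for t in range(1, n):
    counts[t] = rng.poisson(2.0 + 0.6 * counts[t - 1] + 3.0 * index[t])

# Poisson regression with a log link:
#   log E[y_t] = b0 + b1 * log(1 + y_{t-1}) + b2 * index_t
X = np.column_stack([np.ones(n - 1), np.log1p(counts[:-1]), index[1:]])
y = counts[1:]

beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]  # warm start
for _ in range(15):                      # Newton-Raphson refinement
    mu = np.exp(np.clip(X @ beta, -20, 20))
    grad = X.T @ (y - mu)
    hess = X.T @ (mu[:, None] * X)
    beta += np.linalg.solve(hess, grad)

one_step_ahead = np.exp(np.clip(X @ beta, -20, 20))
```

    In the paper's setting, the analogous structure is fit with a Negative Binomial GARMA model and evaluated on held-out seasons; the point here is only how lagged counts and an external signal enter the design matrix.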

  1. NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information

    USGS Publications Warehouse

    2004-01-01

    Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.

  2. Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.

    PubMed

    Long, Jeffrey D; Loeber, Rolf; Farrington, David P

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
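
    The "numerical averaging" idea in the abstract can be sketched in a few lines (illustrative parameter values, not the study's estimates): integrate the subject-specific logistic curve over the random-intercept distribution to recover a marginal, group-level curve.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm

# Random-intercepts logistic model: P(y=1 | u) = expit(b0 + u + b1*x),
# with u ~ N(0, sigma^2). Illustrative parameter values.
b0, b1, sigma = -1.0, 0.8, 1.5
x = np.linspace(-3, 3, 61)

# Numerical averaging over a fine grid of random-intercept values u.
u = np.linspace(-6 * sigma, 6 * sigma, 2001)
w = norm.pdf(u, scale=sigma)
w /= w.sum()                              # normalized quadrature weights
marginal = (w[:, None] * expit(b0 + u[:, None] + b1 * x[None, :])).sum(axis=0)

subject_specific = expit(b0 + b1 * x)     # curve for a typical subject (u = 0)
```

    The averaged curve is flatter than the subject-specific one (the familiar attenuation of marginal effects), which is exactly why the two models answer different questions: group-level versus individual-level change.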

  3. Evaluation of NASA satellite- and assimilation model-derived long-term daily temperature data over the continental US

    USDA-ARS?s Scientific Manuscript database

    Agricultural research increasingly is expected to provide precise, quantitative information with an explicit geographic coverage. Limited availability of continuous daily meteorological records often constrains efforts to provide such information through integrated use of simulation models, spatial ...

  4. A Thermal-based Two-Source Energy Balance Model for Estimating Evapotranspiration over Complex Canopies

    USDA-ARS?s Scientific Manuscript database

    Land surface temperature (LST) provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition as well as providing useful information for constraining prognostic land surface models. This presentation describes a robust but relatively simple LS...

  5. Consumer Behavior Under Conflicting Information Provided by Interested Parties: Implications for Equilibrium in the Market for Credence Goods.

    PubMed

    Russo, Carlo; Tufi, Eleonora

    2016-01-01

    Incomplete information in food consumption is a relevant topic in agricultural economics. This paper proposes a theoretical model describing consumer behavior, market equilibrium, and public intervention in an industry where consumers must rely on information from interested parties such as producers or associations. We provide a simple game-theoretic model showing the link between price competition and the strategic use of information. If information is unverifiable (as in the case of credence attributes), firms may have no incentive to advertise true claims and consumer decisions may be biased. Our model incorporates the opportunistic behavior of self-interested information providers. The result is a model of competition in prices and information that finds a potential for market failure and public intervention. In the paper we discuss the efficiency of three possible regulations: banning false claims, subsidizing advertising campaigns, and public statements in favor of true claims. In that context, some recent patents related both to regulatory compliance in communication and to the reduction of asymmetric information between producers and consumers are considered. Finally, we find that the efficiency of these policy tools is affected by the firms' reputation for trustworthiness.

  6. Integrating Research into Decision Making: Providing Examples for an Informal Action Research Model. Research Report No. 83-24.

    ERIC Educational Resources Information Center

    Losak, John; Morris, Cathy

    One promising avenue for increasing the utilization of institutional research data is the informal action research model. While formal action research stresses the involvement of researchers throughout the decision-making process, the informal model stresses participation in the later stages of decision making. Informal action research requires…

  7. A thermal-based remote sensing modeling system for estimating daily evapotranspiration from field to global scales

    USDA-ARS?s Scientific Manuscript database

    Thermal-infrared (TIR) remote sensing of land surface temperature (LST) provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition as well as providing useful information for constraining prognostic land surface models. This presentation d...

  8. Soft sensor for real-time cement fineness estimation.

    PubMed

    Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir

    2015-03-01

    This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when that information, for whatever reason, is not available by direct measurement. In this application, soft sensors provide information on a process variable normally supplied by off-line laboratory tests performed at long time intervals. Cement fineness is one of the crucial parameters that define the quality of the produced cement. Providing real-time information on cement fineness using soft sensors can overcome the limitations and problems that originate from the lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information-theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate the cement fineness of laboratory samples was analyzed. The models with the best performance and the capacity to adapt to changes in the cement grinding circuit were selected to implement the soft sensors. The soft sensors were tested using data from continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, the soft sensors were implemented and installed in the control room of a cement factory. Results on site confirm those obtained during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
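    The information-theoretic input selection step described above can be sketched generically. This is a minimal illustration, not the authors' code: the equal-width binning scheme, bin count, and variable names are assumptions.

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Estimate I(X;Y) in bits from paired samples via equal-width binning."""
    def discretize(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0          # guard against constant signals
        return [min(int((x - lo) / w), bins - 1) for x in v]
    dx, dy = discretize(xs), discretize(ys)
    n = len(xs)
    pxy = Counter(zip(dx, dy))
    px, py = Counter(dx), Counter(dy)
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def rank_inputs(candidates, target):
    """Rank candidate process variables by mutual information with the target."""
    return sorted(candidates,
                  key=lambda name: mutual_information(candidates[name], target),
                  reverse=True)
```

    A candidate variable that carries more information about the laboratory-measured fineness ranks higher and becomes a model input.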

  9. Upper atmosphere research: Reaction rate and optical measurements

    NASA Technical Reports Server (NTRS)

    Stief, L. J.; Allen, J. E., Jr.; Nava, D. F.; Payne, W. A., Jr.

    1990-01-01

    The objective is to provide photochemical, kinetic, and spectroscopic information necessary for photochemical models of the Earth's upper atmosphere and to examine reactions or reactants not presently in the models to either confirm the correctness of their exclusion or provide evidence to justify future inclusion in the models. New initiatives are being taken in technique development (many of them laser based) and in the application of established techniques to address gaps in the photochemical/kinetic data base, as well as to provide increasingly reliable information.

  10. Modeling and Simulation Resource Repository (MSRR): System Engineering/Integrated M&S Management Approach

    NASA Technical Reports Server (NTRS)

    Milroy, Audrey; Hale, Joe

    2006-01-01

    NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers with information on a model's fidelity, credibility, and quality, including its verification, validation, and accreditation information. The NASA MSRR will be implemented leveraging M&S industry best practices. This presentation will discuss the requirements that will enable NASA to capture and make available the "metadata" or "simulation biography" associated with a model. The presentation will also describe the requirements that drive how NASA will collect and document relevant information for models or suites of models in order to facilitate their use and reuse and to provide visibility across NASA organizations and the larger M&S community.

  11. Mapping real-time air pollution health risk for environmental management: Combining mobile and stationary air pollution monitoring with neural network models.

    PubMed

    Adams, Matthew D; Kanaroglou, Pavlos S

    2016-03-01

    Air pollution poses health concerns at the global scale. The challenge of managing air pollution is significant because of the many air pollutants, insufficient funds for monitoring and abatement programs, and the political and social challenges of defining policy to limit emissions. Some governments provide citizens with air pollution health risk information to allow them to limit their exposure. However, many regions still have air pollution monitoring networks too sparse to support real-time mapping. Where available, these risk mapping systems either provide absolute concentration data or use the concentrations to derive an Air Quality Index, which expresses the risk from a mix of air pollutants as a single value. When risk information is presented as a single value for an entire region, it does not convey the spatial variation within the region, and without an understanding of that local variation residents can only make a partially informed decision when choosing daily activities. The single value is typically provided because of the limited number of active monitoring units in the area. In our work, we overcome this issue by leveraging mobile air pollution monitoring techniques, meteorological information, and land use information to map real-time air pollution health risks. We propose an approach that can provide improved health risk information to the public by applying neural network models within a framework inspired by land use regression. Mobile air pollution monitoring campaigns were conducted across Hamilton from 2005 to 2013. These mobile air pollution data were modelled with a number of predictor variables, including the surrounding land use characteristics, the meteorological conditions, air pollution concentrations from fixed-location monitors, and traffic information during the time of collection. Fine particulate matter and nitrogen dioxide were both modelled. During the model fitting process we reserved twenty percent of the data to validate the predictions. The models' performances, measured with the coefficient of determination, were 0.78 and 0.34 for PM2.5 and NO2, respectively. We apply a relative importance measure to identify the importance of each variable in the neural network and thereby partially overcome the black-box nature of neural network models. Copyright © 2015 Elsevier Ltd. All rights reserved.
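    The validation procedure described above (a reserved twenty-percent holdout scored with the coefficient of determination) can be sketched as follows; the function names and record structure are hypothetical, not taken from the paper.

```python
import random

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

def holdout_split(records, frac=0.2, seed=42):
    """Reserve a fraction of the records for validating model predictions."""
    idx = list(range(len(records)))
    random.Random(seed).shuffle(idx)
    cut = int(len(records) * frac)
    return [records[i] for i in idx[cut:]], [records[i] for i in idx[:cut]]
```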

  12. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees have historically been managed for crop pollination; however, recent population declines draw attention to the pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
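    Simulated annealing, used above for uninformed parameter optimization, can be sketched in generic form. The objective function, step size, and cooling schedule below are illustrative assumptions rather than the study's actual settings.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.995,
                        iters=5000, seed=1):
    """Minimize `objective` over a parameter vector by simulated annealing."""
    rng = random.Random(seed)
    x = list(x0)
    fx = objective(x)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = objective(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest
```

    Early high temperatures let the search escape local minima; as the temperature cools, the walk settles into the best parameter region found.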

  13. Brain activity and cognition: a connection from thermodynamics and information theory.

    PubMed

    Collell, Guillem; Fauquet, Jordi

    2015-01-01

    The connection between brain and mind is an important scientific and philosophical question that we are still far from completely understanding. A crucial point to our work is noticing that thermodynamics provides a convenient framework to model brain activity, whereas cognition can be modeled in information-theoretical terms. In fact, several models have been proposed so far from both approaches. A second critical remark is the existence of deep theoretical connections between thermodynamics and information theory. In fact, some well-known authors claim that the laws of thermodynamics are nothing but principles in information theory. Unlike in physics or chemistry, a formalization of the relationship between information and energy is currently lacking in neuroscience. In this paper we propose a framework to connect physical brain and cognitive models by means of the theoretical connections between information theory and thermodynamics. Ultimately, this article aims at providing further insight on the formal relationship between cognition and neural activity.

  14. Trends and Issues in U.S. Navy Manpower

    DTIC Science & Technology

    1985-01-01

    Planning (ADSTAP) system, consists of several subsystems and models for planning and managing enlisted manpower, personnel, and training. It was... models to provide information for formulating goals and planning the transition from current inventory to established objectives... operational...planning models to provide information for formulating operating plans to control the size and quality (ratings or skills and pay grades) of the active-duty

  15. Using conceptual work products of health care to design health IT.

    PubMed

    Berry, Andrew B L; Butler, Keith A; Harrington, Craig; Braxton, Melissa O; Walker, Amy J; Pete, Nikki; Johnson, Trevor; Oberle, Mark W; Haselkorn, Jodie; Paul Nichol, W; Haselkorn, Mark

    2016-02-01

    This paper introduces a new, model-based design method for interactive health information technology (IT) systems. This method extends workflow models with models of conceptual work products. When the health care work being modeled is substantially cognitive, tacit, and complex in nature, graphical workflow models can become too complex to be useful to designers. Conceptual models complement and simplify workflows by providing an explicit specification for the information product they must produce. We illustrate how conceptual work products can be modeled using standard software modeling language, which allows them to provide fundamental requirements for what the workflow must accomplish and the information that a new system should provide. Developers can use these specifications to envision how health IT could enable an effective cognitive strategy as a workflow with precise information requirements. We illustrate the new method with a study conducted in an outpatient multiple sclerosis (MS) clinic. This study shows specifically how the different phases of the method can be carried out, how the method allows for iteration across phases, and how the method generated a health IT design for case management of MS that is efficient and easy to use. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Role of Kinetic Modeling in Biomedical Imaging

    PubMed Central

    Huang, Sung-Cheng

    2009-01-01

    Biomedical imaging can reveal clear 3-dimensional body morphology non-invasively with high spatial resolution. Its efficacy, in both clinical and pre-clinical settings, is enhanced with its capability to provide in vivo functional/biological information in tissue. The role of kinetic modeling in providing biological/functional information in biomedical imaging is described. General characteristics and limitations in extracting biological information are addressed and practical approaches to solve the problems are discussed and illustrated with examples. Some future challenges and opportunities for kinetic modeling to expand the capability of biomedical imaging are also presented. PMID:20640185
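    As one deliberately simple illustration of the kind of kinetic model used in imaging, a one-tissue compartment model relates tissue activity Ct to an arterial plasma input Cp through dCt/dt = K1*Cp(t) - k2*Ct(t). The forward-Euler sketch below uses assumed parameter values and is not drawn from the article.

```python
def one_tissue_model(cp, k1, k2, dt):
    """Integrate dCt/dt = K1*Cp(t) - k2*Ct(t) with forward Euler.

    cp : sampled plasma input curve; k1, k2 : rate constants; dt : time step.
    """
    ct = [0.0]
    for t in range(1, len(cp)):
        ct.append(ct[-1] + dt * (k1 * cp[t - 1] - k2 * ct[-1]))
    return ct
```

    With a constant input, the tissue curve rises toward the equilibrium value (K1/k2) times the plasma level, which is how rate constants fitted to image-derived curves yield biological information.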

  17. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
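    For the two-neuron log-linear model mentioned above, log p(x1, x2) = θ1·x1 + θ2·x2 + θ12·x1·x2 − ψ with x ∈ {0, 1}, the information-geometric coordinates follow directly from the four joint probabilities. The closed forms below are the standard textbook expressions, not the authors' code.

```python
import math

def ig_measures(p00, p01, p10, p11):
    """Information-geometric coordinates of the two-neuron log-linear model
    log p(x1,x2) = th1*x1 + th2*x2 + th12*x1*x2 - psi, for x1, x2 in {0,1}.
    """
    th1 = math.log(p10 / p00)                 # first-order (input-related)
    th2 = math.log(p01 / p00)
    th12 = math.log(p11 * p00 / (p10 * p01))  # second-order (connection-related)
    return th1, th2, th12
```

    The second-order coordinate θ12 vanishes exactly when the two neurons fire independently, which is why it serves as a connection-strength measure.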

  18. Educating the patient for health care communication in the age of the world wide web: a qualitative study.

    PubMed

    Woodward-Kron, Robyn; Connor, Melanie; Schulz, Peter J; Elliott, Kristine

    2014-02-01

    Communication skills teaching in medical education has yet to acknowledge the impact of the Internet on physician-patient communication. The authors present a conceptual model showing the variables influencing how and to what extent physicians and patients discuss Internet-sourced health information as part of the consultation with the purpose of educating the patient. A study exploring the role physicians play in patient education mediated through health information available on the Internet provided the foundation for the conceptual model. Twenty-one physicians participated in semistructured interviews between 2011 and 2013. Participants were from Australia and Switzerland, whose citizens demonstrate different degrees of Internet usage and who differ culturally and ethnically. The authors analyzed the interviews thematically and iteratively. The themes as well as their interrelationships informed the components of the conceptual model. The intrinsic elements of the conceptual model are the physician, the patient, and Internet based health information. The extrinsic variables of setting, time, and communication activities as well as the quality, availability, and usability of the Internet-based health information influenced the degree to which physicians engaged with, and were engaged by, their patients about Internet-based health information. The empirically informed model provides a means of understanding the environment, enablers, and constraints of discussing Internet-based health information, as well as the benefits for patients' understanding of their health. It also provides medical educators with a conceptual tool to engage and support physicians in their activities of communicating health information to patients.

  19. Contact-assisted protein structure modeling by global optimization in CASP11.

    PubMed

    Joo, Keehyoung; Joung, InSuk; Cheng, Qianyi; Lee, Sung Jong; Lee, Jooyoung

    2016-09-01

    We have applied the conformational space annealing method to contact-assisted protein structure modeling in CASP11. For Tp targets, where predicted residue-residue contact information was provided, a contact energy term in the form of the Lorentzian function was implemented together with the physical energy terms used in our template-free modeling of proteins. Although we observed some structural improvement of Tp models over the models predicted without the Tp information, the improvement was not substantial on average. This is partly due to the inaccuracy of the provided contact information, of which only about 18% was correct. For Ts targets, where ambiguous NOE (Nuclear Overhauser Effect) restraints were provided, we formulated the modeling as a two-tier optimization problem covering: (1) the assignment of NOE peaks and (2) three-dimensional (3D) model generation based on the assigned NOEs. Although solving the problem directly appears intractable at first glance, we demonstrate through CASP11 that remarkably accurate protein 3D modeling is possible by brute-force optimization of a relevant energy function. For 19 Ts targets with an average size of 224 residues, the generated protein models had an accuracy of about 3.6 Å over Cα atoms. Even greater structural improvement was observed when additional Tc contact information was provided. For 20 of the 24 Tc targets, we were able to generate protein structures better, in terms of GDT-TS, than the best model from the rest of the CASP11 groups. Proteins 2016; 84(Suppl 1):189-199. © 2015 Wiley Periodicals, Inc.
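    The abstract specifies only that the contact energy term takes the form of a Lorentzian function. A generic sketch of such a smooth restraint well, centred on the predicted contact distance with purely illustrative parameters (not CASP11's actual values), might look like this:

```python
def lorentzian_contact_energy(r, r0=8.0, width=2.0, weight=1.0):
    """Assumed Lorentzian restraint: a smooth energy well centred on the
    predicted contact distance r0; all parameter values are illustrative.
    """
    return -weight / (1.0 + ((r - r0) / width) ** 2)
```

    Unlike a hard distance cutoff, the Lorentzian well decays gracefully, so a wrongly predicted contact far from its target distance contributes little energy and does not dominate the optimization.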

  20. Model-Driven Development for PDS4 Software and Services

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D. J.; Algermissen, S. S.; Cayanan, M. D.; Joyner, R. S.; Hardman, S. H.; Padams, J. H.

    2018-04-01

    PDS4 data product labels provide the information necessary for processing the referenced digital object. However, significantly more information is available in the PDS4 Information Model. This additional information is made available for use, by both software and services, to configure, promote resiliency, and improve interoperability.

  1. A Management Information System in a Library Environment.

    ERIC Educational Resources Information Center

    Sutton, Michael J.; Black, John B.

    More effective use of diminishing resources was needed to provide the best possible services at the University of Guelph (Ontario, Canada) library. This required the improved decision-making processes of a Library Management Information System (LMIS) to provide systematic information analysis. An information flow model was created, and an…

  2. Extracting local information from crowds through betting markets

    NASA Astrophysics Data System (ADS)

    Weijs, Steven

    2015-04-01

    In this research, a set-up is considered in which users can bet against a forecasting agency to challenge its probabilistic forecasts. From an information-theory standpoint, a reward structure is considered that either provides the forecasting agency with better information, by paying the successful providers of information for their winning bets, or funds excellent forecasting agencies through users who think they know better. Especially for local forecasts, the approach may help to diagnose model biases and to identify local predictive information that can be incorporated in the models. The challenges and opportunities for implementing such a system in practice are also discussed.
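    One information-theoretic reward structure consistent with this set-up is the logarithmic score, under which a bettor quoting distribution q against the agency's p gains log2 q(o)/p(o) bits when outcome o occurs; the expected payoff is then positive only with genuinely better information. This sketch is an assumed illustration, not the paper's exact mechanism.

```python
import math

def log_bet_payoff(q, p, outcome):
    """Bits won by a bettor quoting distribution q against the agency's p
    when `outcome` occurs: log2 q(outcome) / p(outcome)."""
    return math.log2(q[outcome] / p[outcome])

def expected_payoff(q, p, true_dist):
    """Expected payoff under the true distribution; equals D(true||p) - D(true||q),
    so it is positive only when q is genuinely closer to the truth than p."""
    return sum(pr * log_bet_payoff(q, p, o) for o, pr in true_dist.items())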

  3. Habitat Suitability Index Models: Pronghorn

    USGS Publications Warehouse

    Allen, Arthur W.; Cook, John G.; Armbruster, Michael J.

    1984-01-01

    This is one of a series of publications that provide information on the habitat requirements of selected fish and wildlife species. Literature describing the relationship between habitat variables related to life requisites and habitat suitability for the pronghorn (Antilocapra americana) are synthesized. These data are subsequently used to develop Habitat Suitability Index (HSI) models. The HSI models are designed to provide information that can be used in impact assessment and habitat management.

  4. Information model for digital exchange of soil-related data - potential modifications on ISO 28258

    NASA Astrophysics Data System (ADS)

    Schulz, Sina; Eberhardt, Einar; Reznik, Tomas

    2017-04-01

    ABSTRACT The International Standard ISO 28258 "Digital exchange of soil-related data" provides an information model that describes the organization of soil data to facilitate data transfer between data producers, holders and users. The data model contains a fixed set of "core" soil feature types, data types and properties, whereas its customization is on the data provider level, e.g. by adding user-specific properties. Rules for encoding these information are given by a customized XML-based format (called "SoilML"). Some technical shortcomings are currently under consideration in the ISO working group. Directly after publication of ISO 28258 in 2013, also several conceptual and implementation issues concerning the information model had been identified, such as renaming of feature types, modification of data types, and enhancement of definitions or addition of super-classes are part of the current revision process. Conceptual changes for the current ISO data model that are compatible with the Australian/New Zealand soil data model ANZSoilML and the EU INSPIRE Data Specifications Soil are also discussed. The concept of a model with a limited set of properties that can be extended by the data provider should remain unaffected. This presentation aims to introduce and comment on the current ISO soil information model and the proposed modifications. Moreover, we want to discuss these adjustments with respect to enhanced applicability of this International Standard.

  5. Antecedent Characteristics of Online Cancer Information Seeking Among Rural Breast Cancer Patients: An Application of the Cognitive-Social Health Information Processing (C-SHIP) Model

    PubMed Central

    Shaw, Bret R.; DuBenske, Lori L.; Han, Jeong Yeob; Cofta-Woerpel, Ludmila; Bush, Nigel; Gustafson, David H.; McTavish, Fiona

    2013-01-01

    Little research has examined the antecedent characteristics of patients most likely to seek online cancer information. This study employs the Cognitive-Social Health Information Processing (C-SHIP) model as a framework to understand what psychosocial characteristics precede online cancer-related information seeking among rural breast cancer patients who often have fewer healthcare providers and limited local support services. Examining 144 patients who were provided free computer hardware, Internet access and training for how to use an Interactive Cancer Communication System, pre-test survey scores indicating patients’ psychosocial status were correlated with specific online cancer information seeking behaviors. Each of the factors specified by the C-SHIP model had significant relationships with online cancer information seeking behaviors with the strongest findings emerging for cancer-relevant encodings and self-construals, cancer-relevant beliefs and expectancies and cancer-relevant self-regulatory competencies and skills. Specifically, patients with more negative appraisals in these domains were more likely to seek out online cancer information. Additionally, antecedent variables associated with the C-SHIP model had more frequent relationships with experiential information as compared to didactic information. This study supports the applicability of the model to discern why people afflicted with cancer may seek online information to cope with their disease. PMID:18569368

  6. Antecedent characteristics of online cancer information seeking among rural breast cancer patients: an application of the Cognitive-Social Health Information Processing (C-SHIP) model.

    PubMed

    Shaw, Bret R; Dubenske, Lori L; Han, Jeong Yeob; Cofta-Woerpel, Ludmila; Bush, Nigel; Gustafson, David H; McTavish, Fiona

    2008-06-01

    Little research has examined the antecedent characteristics of patients most likely to seek online cancer information. This study employs the Cognitive-Social Health Information Processing (C-SHIP) model as a framework to understand what psychosocial characteristics precede online cancer-related information seeking among rural breast cancer patients who often have fewer health care providers and limited local support services. Examining 144 patients who were provided free computer hardware, Internet access, and training for how to use an interactive cancer communication system, pretest survey scores indicating patients' psychosocial status were correlated with specific online cancer information seeking behaviors. Each of the factors specified by the C-SHIP model had significant relationships with online cancer information seeking behaviors, with the strongest findings emerging for cancer-relevant encodings and self-construals, cancer-relevant beliefs and expectancies, and cancer-relevant self-regulatory competencies and skills. Specifically, patients with more negative appraisals in these domains were more likely to seek out online cancer information. Additionally, antecedent variables associated with the C-SHIP model had more frequent relationships with experiential information as compared with to didactic information. This study supports the applicability of the model to discern why people afflicted with cancer may seek online information to cope with their disease.

  7. Life sciences domain analysis model

    PubMed Central

    Freimuth, Robert R; Freund, Elaine T; Schick, Lisa; Sharma, Mukesh K; Stafford, Grace A; Suzek, Baris E; Hernandez, Joyce; Hipp, Jason; Kelley, Jenny M; Rokicki, Konrad; Pan, Sue; Buckler, Andrew; Stokes, Todd H; Fernandez, Anna; Fore, Ian; Buetow, Kenneth H

    2012-01-01

    Objective Meaningful exchange of information is a fundamental challenge in collaborative biomedical research. To help address this, the authors developed the Life Sciences Domain Analysis Model (LS DAM), an information model that provides a framework for communication among domain experts and technical teams developing information systems to support biomedical research. The LS DAM is harmonized with the Biomedical Research Integrated Domain Group (BRIDG) model of protocol-driven clinical research. Together, these models can facilitate data exchange for translational research. Materials and methods The content of the LS DAM was driven by analysis of life sciences and translational research scenarios and the concepts in the model are derived from existing information models, reference models and data exchange formats. The model is represented in the Unified Modeling Language and uses ISO 21090 data types. Results The LS DAM v2.2.1 is comprised of 130 classes and covers several core areas including Experiment, Molecular Biology, Molecular Databases and Specimen. Nearly half of these classes originate from the BRIDG model, emphasizing the semantic harmonization between these models. Validation of the LS DAM against independently derived information models, research scenarios and reference databases supports its general applicability to represent life sciences research. Discussion The LS DAM provides unambiguous definitions for concepts required to describe life sciences research. The processes established to achieve consensus among domain experts will be applied in future iterations and may be broadly applicable to other standardization efforts. Conclusions The LS DAM provides common semantics for life sciences research. Through harmonization with BRIDG, it promotes interoperability in translational science. PMID:22744959

  8. Towards a Ubiquitous User Model for Profile Sharing and Reuse

    PubMed Central

    de Lourdes Martinez-Villaseñor, Maria; Gonzalez-Mendoza, Miguel; Hernandez-Gress, Neil

    2012-01-01

    People interact with systems and applications through several devices and are willing to share information about preferences, interests and characteristics. Social networking profiles, data from advanced sensors attached to personal gadgets, and semantic web technologies such as FOAF and microformats are valuable sources of personal information that could provide a fair understanding of the user, but profile information is scattered over different user models. Some researchers in the ubiquitous user modeling community envision the need to share user model's information from heterogeneous sources. In this paper, we address the syntactic and semantic heterogeneity of user models in order to enable user modeling interoperability. We present a dynamic user profile structure based in Simple Knowledge Organization for the Web (SKOS) to provide knowledge representation for ubiquitous user model. We propose a two-tier matching strategy for concept schemas alignment to enable user modeling interoperability. Our proposal is proved in the application scenario of sharing and reusing data in order to deal with overweight and obesity. PMID:23201995

  9. Conceptual data modeling of wildlife response indicators to ecosystem change in the Arctic

    USGS Publications Warehouse

    Walworth, Dennis; Pearce, John M.

    2015-08-06

    Large research studies are often challenged to effectively expose and document the types of information being collected and the reasons for data collection across what are often a diverse cadre of investigators of differing disciplines. We applied concepts from the field of information or data modeling to the U.S. Geological Survey (USGS) Changing Arctic Ecosystems (CAE) initiative to prototype an application of information modeling. The USGS CAE initiative is collecting information from marine and terrestrial environments in Alaska to identify and understand the links between rapid physical changes in the Arctic and response of wildlife populations to these ecosystem changes. An associated need is to understand how data collection strategies are informing the overall science initiative and facilitating communication of those strategies to a wide audience. We explored the use of conceptual data modeling to provide a method by which to document, describe, and visually communicate both enterprise and study level data; provide a simple means to analyze commonalities and differences in data acquisition strategies between studies; and provide a tool for discussing those strategies among researchers and managers.

  10. A framework for improving the quality of health information on the world-wide-web and bettering public (e-)health: the MedCERTAIN approach.

    PubMed

    Eysenbach, G; Köhler, C; Yihune, G; Lampe, K; Cross, P; Brickley, D

    2001-01-01

    There has been considerable debate about the variable quality of health information on the world-wide-web and its impact on public health. While central authorities to regulate, control, censor, or centrally approve information, in-formation providers or websites are neither realistic nor desirable, public health professionals are interested in making systems available that direct patient streams to the best available information sources. National governments and medical societies have also recognized their responsibility to help users to identify "good quality" information sources. But what constitutes good quality, and how can such a system be implemented in a decentralized and democratic manner? This paper presents a model which combines aspects of consumer education, encouragement of best practices among information providers, self-labeling and external evaluations. The model is currently being implemented and evaluated in the MedCERTAIN project, funded by the European Union under the Action Plan for Safer Use of the Internet. The aim is to develop a technical and organisational infrastructure for a pilot system that allows consumers to access metainformation about web-sites and health information providers, including disclosure information from health providers and opinions of external evaluators. The paper explains the general conceptual framework of the model and presents preliminary experiences including results from an expert consensus meeting, where the framework was discussed.

  11. The application of use case modeling in designing medical imaging information systems.

    PubMed

    Safdari, Reza; Farzi, Jebraeil; Ghazisaeidi, Marjan; Mirzaee, Mahboobeh; Goodini, Azadeh

    2013-01-01

    Introduction. This study examines the application of use case modeling in analyzing and designing information systems to support Medical Imaging services. Methods. The application of use case modeling in analyzing and designing health information systems was examined using electronic database resources (PubMed, Google Scholar), and the characteristics of the modeling approach and its effect on the development and design of health information systems were analyzed. Results. The analysis indicated that provident modeling of health information systems should provide quick access to many health data resources so that patients' data can be used to expand remote services and comprehensive Medical Imaging advice. These experiences also show that progressing through the infrastructure development stages by gradual, repeated evolution of user requirements is more robust, and can shorten the requirements engineering cycle in the design of Medical Imaging information systems. Conclusion. The use case modeling approach can be effective in directing the problems of health and Medical Imaging information systems toward understanding, focused initiation and analysis, better planning, iteration, and control.

  12. Building Information Modeling (BIM) Roadmap: Supplement 2 - BIM Implementation Plan for Military Construction Projects, Bentley Platform

    DTIC Science & Technology

    2011-01-01

    ERDC TR-06-10, Supplement 2 (January 2011). Building Information Modeling (BIM) Roadmap, Supplement 2 – BIM Implementation Plan for Military... Approved for public release; distribution is unlimited. Abstract: Building Information Modeling (BIM) technology provides the communities of practice in

  13. Project Shuttle simulation math model coordination catalog, revision 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A catalog is presented of subsystem and environment math models used or planned for space shuttle simulations. The purpose is to facilitate sharing of similar math models between shuttle simulations. It provides information on math model requirements, formulations, schedules, and contact persons for further information.

  14. Development of a tiered and binned genetic counseling model for informed consent in the era of multiplex testing for cancer susceptibility.

    PubMed

    Bradbury, Angela R; Patrick-Miller, Linda; Long, Jessica; Powers, Jacquelyn; Stopfer, Jill; Forman, Andrea; Rybak, Christina; Mattie, Kristin; Brandt, Amanda; Chambers, Rachelle; Chung, Wendy K; Churpek, Jane; Daly, Mary B; Digiovanni, Laura; Farengo-Clark, Dana; Fetzer, Dominique; Ganschow, Pamela; Grana, Generosa; Gulden, Cassandra; Hall, Michael; Kohler, Lynne; Maxwell, Kara; Merrill, Shana; Montgomery, Susan; Mueller, Rebecca; Nielsen, Sarah; Olopade, Olufunmilayo; Rainey, Kimberly; Seelaus, Christina; Nathanson, Katherine L; Domchek, Susan M

    2015-06-01

    Multiplex genetic testing, including both moderate- and high-penetrance genes for cancer susceptibility, is associated with greater uncertainty than traditional testing, presenting challenges to informed consent and genetic counseling. We sought to develop a new model for informed consent and genetic counseling for four ongoing studies. Drawing from professional guidelines, literature, conceptual frameworks, and clinical experience, a multidisciplinary group developed a tiered-binned genetic counseling approach proposed to facilitate informed consent and improve outcomes of cancer susceptibility multiplex testing. In this model, tier 1 "indispensable" information is presented to all patients. More specific tier 2 information is provided to support variable informational needs among diverse patient populations. Clinically relevant information is "binned" into groups to minimize information overload, support informed decision making, and facilitate adaptive responses to testing. Seven essential elements of informed consent are provided to address the unique limitations, risks, and uncertainties of multiplex testing. A tiered-binned model for informed consent and genetic counseling has the potential to address the challenges of multiplex testing for cancer susceptibility and to support informed decision making and adaptive responses to testing. Future prospective studies including patient-reported outcomes are needed to inform how to best incorporate multiplex testing for cancer susceptibility into clinical practice. Genet Med 17(6): 485-492.

  15. Modeling regional-scale wildland fire emissions with the wildland fire emissions information system

    Treesearch

    Nancy H.F. French; Donald McKenzie; Tyler Erickson; Benjamin Koziol; Michael Billmire; K. Endsley; Naomi K.Y. Scheinerman; Liza Jenkins; Mary E. Miller; Roger Ottmar; Susan Prichard

    2014-01-01

    As carbon modeling tools become more comprehensive, spatial data are needed to improve quantitative maps of carbon emissions from fire. The Wildland Fire Emissions Information System (WFEIS) provides mapped estimates of carbon emissions from historical forest fires in the United States through a web browser. WFEIS improves access to data and provides a consistent...

  16. Preliminary description of the area navigation software for a microcomputer-based Loran-C receiver

    NASA Technical Reports Server (NTRS)

    Oguri, F.

    1983-01-01

    The development of new navigation software and its implementation on a microcomputer (MOS 6502) to provide high-quality navigation information are described. This software provides Area/Route Navigation (RNAV) information from Time Differences (TDs) in raw form, using both an elliptical Earth model and a spherical model. The software is prepared for the microcomputer-based Loran-C receiver. To compute navigation information, a MOS 6502 microcomputer and a mathematical chip (AM 9511A) were combined with the Loran-C receiver. Final data reveal that this software does indeed provide accurate information with reasonable execution times.

  17. Using Instrumental Variable (IV) Tests to Evaluate Model Specification in Latent Variable Structural Equation Models*

    PubMed Central

    Kirby, James B.; Bollen, Kenneth A.

    2009-01-01

    Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used to not only identify a misspecified model, but to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054
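The 2SLS logic behind these specification tests can be illustrated with a deliberately simplified, observed-variable example (not Bollen's latent-variable estimator, which builds model-implied instruments from the measurement model): an instrument purges an endogenous regressor of its correlation with the error term, so the second-stage slope recovers the structural coefficient that OLS misses. All variable names and coefficients below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Structural model: y = 2*x + u, where x is endogenous (correlated with
# the error u) and z is a valid instrument (correlated with x, not u).
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
y = 2.0 * x + u

# OLS is inconsistent here because cov(x, u) != 0.
beta_ols = (x @ y) / (x @ x)

# 2SLS: first stage projects x onto the instrument; second stage
# regresses y on the fitted values.
x_hat = z * ((z @ x) / (z @ z))
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)
# beta_2sls lands near the true 2.0, while beta_ols is biased upward.
```

Residuals from such a second stage are what the 2SLS-based specification tests examine: under a correctly specified equation they should be uncorrelated with the instruments.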

  18. Queues with Choice via Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth

    Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model; however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model depending on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior in U.S. hospitals, we analytically and mathematically show for the first time that the moving average fluid model can exhibit oscillations and determine their dependence on the moving average window. Thus, our analysis provides new insight into how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
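A minimal sketch of the first fluid model, under assumed parameter values rather than those of the paper: each queue evolves as q_i'(t) = λ p_i(t) − μ q_i(t), with join probabilities p_i given by a Multinomial Logit applied to queue lengths observed Δ time units earlier.

```python
import math

def simulate(delta, theta=1.0, lam=10.0, mu=1.0, dt=0.01, t_end=100.0):
    """Euler integration of a two-queue fluid model in which arrivals
    split via a Multinomial Logit choice on queue lengths observed
    `delta` time units ago (all parameter values are illustrative)."""
    lag = max(1, int(round(delta / dt)))
    q1 = [5.0] * (lag + 1)  # history buffers; the asymmetric start
    q2 = [4.0] * (lag + 1)  # breaks the symmetry between the queues
    for _ in range(int(t_end / dt)):
        d1, d2 = q1[-lag - 1], q2[-lag - 1]  # delayed observations
        w1, w2 = math.exp(-theta * d1), math.exp(-theta * d2)
        p1 = w1 / (w1 + w2)  # MNL probability of joining queue 1
        q1.append(q1[-1] + dt * (lam * p1 - mu * q1[-1]))
        q2.append(q2[-1] + dt * (lam * (1.0 - p1) - mu * q2[-1]))
    return q1, q2
```

With these rates, linearizing around the balanced equilibrium gives a critical delay of roughly 0.36, so `simulate(0.1)` settles to equal queue lengths while `simulate(1.0)` oscillates persistently, illustrating the delay-induced instability described above.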

  19. Wyoming greater sage-grouse habitat prioritization: A collection of multi-scale seasonal models and geographic information systems land management tools

    USGS Publications Warehouse

    O'Donnell, Michael S.; Aldridge, Cameron L.; Doherty, Kevin E.; Fedy, Bradley C.

    2015-01-01

    We deliver all products described herein as online geographic information system data for visualization and downloading. We outline the data properties for each model and their data inputs, describe the process of selecting appropriate data products for multifarious applications, describe all data products and software, provide newly derived model composites, and discuss how land managers may use the models to inform future sage-grouse studies and potentially refine conservation efforts. The models, software tools, and associated opportunities for novel applications of these products should provide a suite of additional, but not exclusive, tools for assessing Wyoming Greater Sage-grouse habitats, which land managers, conservationists, and scientists can apply to myriad applications.

  20. Variable-Length Computerized Adaptive Testing Using the Higher Order DINA Model

    ERIC Educational Resources Information Center

    Hsu, Chia-Ling; Wang, Wen-Chung

    2015-01-01

    Cognitive diagnosis models provide profile information about a set of latent binary attributes, whereas item response models yield a summary report on a latent continuous trait. To utilize the advantages of both models, higher order cognitive diagnosis models were developed in which information about both latent binary attributes and latent…

  1. INDOOR AIR QUALITY MODELING (CHAPTER 58)

    EPA Science Inventory

    The chapter discusses indoor air quality (IAQ) modeling. Such modeling provides a way to investigate many IAQ problems without the expense of large field experiments. Where experiments are planned, IAQ models can be used to help design experiments by providing information on exp...

  2. Parametric Modelling of As-Built Beam Framed Structure in Bim Environment

    NASA Astrophysics Data System (ADS)

    Yang, X.; Koehl, M.; Grussenmeyer, P.

    2017-02-01

    Complete documentation and conservation of a historic timber roof requires integrating geometry modelling, attribute and dynamic information management, and the results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform that integrates traditional geometry modelling, parametric element management, and structural analysis. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements form a leaning and crossing beam frame. Since Autodesk Revit, as typical BIM software, provides a platform for parametric modelling and information management, an API plugin was developed that automatically creates the parametric beam elements and links them together with strict relationships. The plugin, introduced in the paper, can obtain the parametric beam model via the Autodesk Revit API from total station points and terrestrial laser scanning data. The results show the potential of automating parametric modelling through interactive API development in a BIM environment, integrating separate data processing steps and platforms into the uniform Revit software.

  3. Managing Approach Plate Information Study (MAPLIST): An Information Requirements Analysis of Approach Chart Use

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Jonnson, Jon E.; Barry, John S.

    1996-01-01

    Adequately presenting all necessary information on an approach chart represents a challenge for cartographers. Since many tasks associated with using approach charts are cognitive (e.g., planning the approach and monitoring its progress), and since the characteristic of a successful interface is one that conforms to the users' mental models, understanding pilots' underlying models of approach chart information would greatly assist cartographers. To provide such information, a new methodology was developed for this study that enhances traditional information requirements analyses by combining psychometric scaling techniques with a simulation task to provide quantifiable links between pilots' cognitive representations of approach information and their use of approach information. Results of this study should augment previous information requirements analyses by identifying what information is acquired, when it is acquired, and what presentation concepts might facilitate its efficient use by better matching the pilots' cognitive model of the information. The primary finding in this study indicated that pilots mentally organize approach chart information into ten primary categories: communications, geography, validation, obstructions, navigation, missed approach, final items, other runways, visibility requirement, and navigation aids. These similarity categories were found to underlie the pilots' information acquisitions, other mental models, and higher level cognitive processes that are used to accomplish their approach and landing tasks.

  4. Brain activity and cognition: a connection from thermodynamics and information theory

    PubMed Central

    Collell, Guillem; Fauquet, Jordi

    2015-01-01

    The connection between brain and mind is an important scientific and philosophical question that we are still far from completely understanding. A crucial point to our work is noticing that thermodynamics provides a convenient framework to model brain activity, whereas cognition can be modeled in information-theoretical terms. In fact, several models have been proposed so far from both approaches. A second critical remark is the existence of deep theoretical connections between thermodynamics and information theory. In fact, some well-known authors claim that the laws of thermodynamics are nothing but principles in information theory. Unlike in physics or chemistry, a formalization of the relationship between information and energy is currently lacking in neuroscience. In this paper we propose a framework to connect physical brain and cognitive models by means of the theoretical connections between information theory and thermodynamics. Ultimately, this article aims at providing further insight on the formal relationship between cognition and neural activity. PMID:26136709

  5. A Model Performance

    ERIC Educational Resources Information Center

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  6. Supporting the Educational Needs of Students with Orthopedic Impairments.

    ERIC Educational Resources Information Center

    Heller, Kathryn Wolff; Swinehart-Jones, Dawn

    2003-01-01

    This article provides information on orthopedic impairments and the unique knowledge and skills required to provide these students with an appropriate education. Information on current practice is provided, as well as training and technical assistance models that can be used to help provide teachers with the necessary training. (Contains…

  7. Emerging In Vitro Liver Technologies for Drug Metabolism and Inter-Organ Interactions

    PubMed Central

    Bale, Shyam Sundhar; Moore, Laura

    2016-01-01

    In vitro liver models provide essential information for evaluating drug metabolism, metabolite formation, and hepatotoxicity. Interfacing liver models with other organ models could provide insights into the desirable as well as unintended systemic side effects of therapeutic agents and their metabolites. Such information is invaluable for drug screening processes particularly in the context of secondary organ toxicity. While interfacing of liver models with other organ models has been achieved, platforms that effectively provide human-relevant precise information are needed. In this concise review, we discuss the current state-of-the-art of liver-based multiorgan cell culture platforms primarily from a drug and metabolite perspective, and highlight the importance of media-to-cell ratio in interfacing liver models with other organ models. In addition, we briefly discuss issues related to development of optimal liver models that include recent advances in hepatic cell lines, stem cells, and challenges associated with primary hepatocyte-based liver models. Liver-based multiorgan models that achieve physiologically relevant coupling of different organ models can have a broad impact in evaluating drug efficacy and toxicity, as well as mechanistic investigation of human-relevant disease conditions. PMID:27049038

  8. Online Cancer Information Seeking: Applying and Extending the Comprehensive Model of Information Seeking.

    PubMed

    Van Stee, Stephanie K; Yang, Qinghua

    2017-10-30

    This study applied the comprehensive model of information seeking (CMIS) to online cancer information and extended the model by incorporating an exogenous variable: interest in online health information exchange with health providers. A nationally representative sample from the Health Information National Trends Survey 4 Cycle 4 was analyzed to examine the extended CMIS in predicting online cancer information seeking. Findings from a structural equation model supported most of the hypotheses derived from the CMIS, as well as the extension of the model related to interest in online health information exchange. In particular, socioeconomic status, beliefs, and interest in online health information exchange predicted utility. Utility, in turn, predicted online cancer information seeking, as did information-carrier characteristics. An unexpected but important finding from the study was the significant, direct relationship between cancer worry and online cancer information seeking. Theoretical and practical implications are discussed.

  9. Convoys of care: Theorizing intersections of formal and informal care

    PubMed Central

    Kemp, Candace L.; Ball, Mary M.; Perkins, Molly M.

    2013-01-01

    Although most care to frail elders is provided informally, much of this care is paired with formal care services. Yet, common approaches to conceptualizing the formal–informal intersection often are static, do not consider self-care, and typically do not account for multi-level influences. In response, we introduce the “convoy of care” model as an alternative way to conceptualize the intersection and to theorize connections between care convoy properties and caregiver and recipient outcomes. The model draws on Kahn and Antonucci's (1980) convoy model of social relations, expanding it to include both formal and informal care providers and also incorporates theoretical and conceptual threads from life course, feminist gerontology, social ecology, and symbolic interactionist perspectives. This article synthesizes theoretical and empirical knowledge and demonstrates the convoy of care model in an increasingly popular long-term care setting, assisted living. We conceptualize care convoys as dynamic, evolving, person- and family-specific, and influenced by a host of multi-level factors. Care convoys have implications for older adults’ quality of care and ability to age in place, for job satisfaction and retention among formal caregivers, and for informal caregiver burden. The model moves beyond existing conceptual work to provide a comprehensive, multi-level, multi-factor framework that can be used to inform future research, including research in other care settings, and to spark further theoretical development. PMID:23273553

  10. An Early Model for Value and Sustainability in Health Information Exchanges: Qualitative Study

    PubMed Central

    2018-01-01

    Background The primary value relative to health information exchange has been seen in terms of cost savings relative to laboratory and radiology testing, emergency department expenditures, and admissions. However, models are needed to statistically quantify value and sustainability and better understand the dependent and mediating factors that contribute to value and sustainability. Objective The purpose of this study was to provide a basis for early model development for health information exchange value and sustainability. Methods A qualitative study was conducted with 21 interviews of eHealth Exchange participants across 10 organizations. Using a grounded theory approach and 3.0 as a relative frequency threshold, 5 main categories and 16 subcategories emerged. Results This study identifies 3 core current perceived value factors and 5 potential perceived value factors—how interviewees predict health information exchanges may evolve as there are more participants. These value factors were used as the foundation for early model development for sustainability of health information exchange. Conclusions Using the value factors from the interviews, the study provides the basis for early model development for health information exchange value and sustainability. This basis includes factors from the research: fostering consumer engagement; establishing a provider directory; quantifying use, cost, and clinical outcomes; ensuring data integrity through patient matching; and increasing awareness, usefulness, interoperability, and sustainability of eHealth Exchange. PMID:29712623

  11. An Early Model for Value and Sustainability in Health Information Exchanges: Qualitative Study.

    PubMed

    Feldman, Sue S

    2018-04-30

    The primary value relative to health information exchange has been seen in terms of cost savings relative to laboratory and radiology testing, emergency department expenditures, and admissions. However, models are needed to statistically quantify value and sustainability and better understand the dependent and mediating factors that contribute to value and sustainability. The purpose of this study was to provide a basis for early model development for health information exchange value and sustainability. A qualitative study was conducted with 21 interviews of eHealth Exchange participants across 10 organizations. Using a grounded theory approach and 3.0 as a relative frequency threshold, 5 main categories and 16 subcategories emerged. This study identifies 3 core current perceived value factors and 5 potential perceived value factors-how interviewees predict health information exchanges may evolve as there are more participants. These value factors were used as the foundation for early model development for sustainability of health information exchange. Using the value factors from the interviews, the study provides the basis for early model development for health information exchange value and sustainability. This basis includes factors from the research: fostering consumer engagement; establishing a provider directory; quantifying use, cost, and clinical outcomes; ensuring data integrity through patient matching; and increasing awareness, usefulness, interoperability, and sustainability of eHealth Exchange. ©Sue S Feldman. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 30.04.2018.

  12. Compassion Fatigue: An Application of the Concept to Informal Caregivers of Family Members with Dementia

    PubMed Central

    Day, Jennifer R.; Anderson, Ruth A.

    2011-01-01

    Introduction. Compassion fatigue is a concept used with increasing frequency in the nursing literature. The objective of this paper is to identify common themes across the literature and to apply these themes, and an existing model of compassion fatigue, to informal caregivers for family members with dementia. Findings. Caregivers for family members with dementia may be at risk for developing compassion fatigue. The model of compassion fatigue provides an informative framework for understanding compassion fatigue in the informal caregiver population. Limitations of the model when applied to this population were identified as traumatic memories and the emotional relationship between parent and child, suggesting areas for future research. Conclusions. Research is needed to better understand the impact of compassion fatigue on informal caregivers through qualitative interviews, to identify informal caregivers at risk for compassion fatigue, and to provide an empirical basis for developing nursing interventions for caregivers experiencing compassion fatigue. PMID:22229086

  13. Informed consent in direct-to-consumer personal genome testing: the outline of a model between specific and generic consent.

    PubMed

    Bunnik, Eline M; Janssens, A Cecile J W; Schermer, Maartje H N

    2014-09-01

    Broad genome-wide testing is increasingly finding its way to the public through the online direct-to-consumer marketing of so-called personal genome tests. Personal genome tests estimate genetic susceptibilities to multiple diseases and other phenotypic traits simultaneously. Providers commonly make use of Terms of Service agreements rather than informed consent procedures. However, to protect consumers from the potential physical, psychological and social harms associated with personal genome testing and to promote autonomous decision-making with regard to the testing offer, we argue that current practices of information provision are insufficient and that there is a place--and a need--for informed consent in personal genome testing, also when it is offered commercially. The increasing quantity, complexity and diversity of most testing offers, however, pose challenges for information provision and informed consent. Both specific and generic models for informed consent fail to meet its moral aims when applied to personal genome testing. Consumers should be enabled to know the limitations, risks and implications of personal genome testing and should be given control over the genetic information they do or do not wish to obtain. We present the outline of a new model for informed consent which can meet both the norm of providing sufficient information and the norm of providing understandable information. The model can be used for personal genome testing, but will also be applicable to other, future forms of broad genetic testing or screening in commercial and clinical settings. © 2012 John Wiley & Sons Ltd.

  14. Describing and Modeling Workflow and Information Flow in Chronic Disease Care

    PubMed Central

    Unertl, Kim M.; Weinger, Matthew B.; Johnson, Kevin B.; Lorenzi, Nancy M.

    2009-01-01

    Objectives The goal of the study was to develop an in-depth understanding of work practices, workflow, and information flow in chronic disease care, to facilitate development of context-appropriate informatics tools. Design The study was conducted over a 10-month period in three ambulatory clinics providing chronic disease care. The authors iteratively collected data using direct observation and semi-structured interviews. Measurements The authors observed all aspects of care in three different chronic disease clinics for over 150 hours, including 157 patient-provider interactions. Observation focused on interactions among people, processes, and technology. Observation data were analyzed through an open coding approach. The authors then developed models of workflow and information flow using Hierarchical Task Analysis and Soft Systems Methodology. The authors also conducted nine semi-structured interviews to confirm and refine the models. Results The study had three primary outcomes: models of workflow for each clinic, models of information flow for each clinic, and an in-depth description of work practices and the role of health information technology (HIT) in the clinics. The authors identified gaps between the existing HIT functionality and the needs of chronic disease providers. Conclusions In response to the analysis of workflow and information flow, the authors developed ten guidelines for design of HIT to support chronic disease care, including recommendations to pursue modular approaches to design that would support disease-specific needs. The study demonstrates the importance of evaluating workflow and information flow in HIT design and implementation. PMID:19717802

  15. Modeling individual tree survival

    Treesearch

    Quang V. Cao

    2016-01-01

    Information provided by growth and yield models is the basis for forest managers to make decisions on how to manage their forests. Among different types of growth models, whole-stand models offer predictions at stand level, whereas individual-tree models give detailed information at tree level. The well-known logistic regression is commonly used to predict tree...
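The logistic-regression approach to individual-tree survival mentioned above can be sketched as follows; the predictors (diameter at breast height and recent diameter growth) and the coefficients are hypothetical placeholders chosen only to illustrate the functional form, not fitted values from any published model.

```python
import math

def survival_probability(dbh_cm, growth_cm_per_yr, coefs=(-1.0, 0.08, 2.0)):
    """Logistic model of individual-tree survival probability.
    Predictors and coefficients here are illustrative assumptions."""
    b0, b1, b2 = coefs
    logit = b0 + b1 * dbh_cm + b2 * growth_cm_per_yr
    return 1.0 / (1.0 + math.exp(-logit))  # inverse-logit link
```

Under any such parameterization, a large, fast-growing tree receives a higher predicted survival probability than a small, suppressed one, which is the tree-level detail that whole-stand models cannot provide.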

  16. Word of Mouth: An Agent-based Approach to Predictability of Stock Prices

    NASA Astrophysics Data System (ADS)

    Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko

    This paper addresses how communication processes among investors affect stock price formation, especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word of mouth model, is introduced for analyzing the problem. This model provides a simple, but sufficiently versatile, description of the informational diffusion process and succeeds in lucidly explaining the predictability of small-sized stocks, which is a stylized fact in financial markets but difficult to resolve with traditional models. Our model also provides a rigorous examination of the under-reaction hypothesis for informational shocks.
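The core mechanism can be sketched under assumed dynamics (this is not the paper's actual agent specification): a single news shock spreads through pairwise meetings, and because the price reflects only the informed share of agents, it adjusts gradually, which produces serially correlated (predictable) returns.

```python
def diffuse(n_agents=1000, meet_rate=0.002, rounds=60, shock=1.0):
    """One news shock spreads by pairwise word of mouth; the price at
    each round reflects only the currently informed share of agents.
    All dynamics and parameter values are illustrative assumptions."""
    informed = 1.0
    prices = []
    for _ in range(rounds):
        # expected new hearers: uninformed agents meeting informed ones
        newly = (n_agents - informed) * meet_rate * informed
        informed = min(float(n_agents), informed + newly)
        prices.append(shock * informed / n_agents)
    return prices
```

Because the price climbs monotonically toward the full-information value, post-shock returns are positively autocorrelated, and the drift lasts longer when diffusion (`meet_rate`) is slow, mirroring the small-stock predictability discussed above.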

  17. Information Processing and Risk Perception: An Adaptation of the Heuristic-Systematic Model.

    ERIC Educational Resources Information Center

    Trumbo, Craig W.

    2002-01-01

    Describes heuristic-systematic information-processing model and risk perception--the two major conceptual areas of the analysis. Discusses the proposed model, describing the context of the data collections (public health communication involving cancer epidemiology) and providing the results of a set of three replications using the proposed model.…

  18. The Research on Informal Learning Model of College Students Based on SNS and Case Study

    NASA Astrophysics Data System (ADS)

    Lu, Peng; Cong, Xiao; Bi, Fangyan; Zhou, Dongdai

    2017-03-01

    With the rapid development of network technology, online informal learning has become a main way for college students to learn a variety of subject knowledge. Students' fondness for SNS communities and the characteristics of SNS itself provide a good opportunity for the informal learning of college students. This research first analyzes related work on informal learning and SNS, then discusses the characteristics and theoretical basis of informal learning. Next, it proposes an SNS-based informal learning model for college students, built around the ways SNS can support students' informal learning. Finally, following the theoretical model and the principles proposed in this study, an informal learning community was implemented using Elgg, an open-source SNS program, and related tools. This research attempts to overcome issues such as the lack of social realism, interactivity, and resource transfer modes in current online informal learning communities, so as to provide a new way of informal learning for college students.

  19. An Information Processing Perspective on Divergence and Convergence in Collaborative Learning

    ERIC Educational Resources Information Center

    Jorczak, Robert L.

    2011-01-01

    This paper presents a model of collaborative learning that takes an information processing perspective of learning by social interaction. The collaborative information processing model provides a theoretical basis for understanding learning principles associated with social interaction and explains why peer-to-peer discussion is potentially more…

  20. Lecturing and Loving It: Applying the Information-Processing Model.

    ERIC Educational Resources Information Center

    Parker, Jonathan K.

    1993-01-01

    Discusses the benefits of lecturing, when done properly, in high schools. Describes the positive attributes of effective lecturers. Provides a human information-processing model applicable to the task of lecturing to students. (HB)

  1. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
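The aggregate-compare-feed-back cycle described above can be sketched as follows; the mean aggregator, the learning rate, and the corrective rule are illustrative assumptions, not the patented method.

```python
# Each model predicts the common event; predictions are aggregated, each
# model's deviation from the aggregate is computed (the "comparative
# information"), and a fraction of that deviation is fed back as a correction.
def feedback_round(predictions, learning_rate=0.5):
    aggregate = sum(predictions) / len(predictions)
    deviations = [p - aggregate for p in predictions]
    adjusted = [p - learning_rate * d for p, d in zip(predictions, deviations)]
    return aggregate, adjusted

preds = [10.0, 12.0, 20.0]   # three models' estimates of one event
for _ in range(3):
    agg, preds = feedback_round(preds)
print(round(agg, 2), [round(p, 2) for p in preds])
```

With this rule the aggregate is preserved while the models' spread shrinks each round, i.e. the feedback loop pulls the ensemble toward consensus.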

  2. Quality Inspection and Analysis of Three-Dimensional Geographic Information Model Based on Oblique Photogrammetry

    NASA Astrophysics Data System (ADS)

    Dong, S.; Yan, Q.; Xu, Y.; Bai, J.

    2018-04-01

    In order to promote the construction of the digital geo-spatial framework in China and accelerate the development of an informatized mapping system, the three-dimensional geographic information model has emerged. Compared with traditional methods, three-dimensional geographic information models based on oblique photogrammetry offer higher accuracy, shorter production cycles, and lower cost, and can more directly reflect the elevation, position, and appearance of features. The technology for producing such models is developing rapidly; market demand and model products have grown substantially, and the associated quality inspection needs are growing as well. A review of the relevant literature shows extensive research on the basic principles and technical characteristics of this technology, but relatively little on quality inspection and analysis. After summarizing the basic principles and technical characteristics of oblique photogrammetry, this paper introduces the inspection contents and inspection methods for three-dimensional geographic information models based on the technology. Drawing on actual inspection work, the paper summarizes the quality problems of such models, analyzes their causes, and proposes quality control measures. It provides technical guidance for the quality inspection of three-dimensional geographic information model data products in China and technical support for the continued development of three-dimensional geographic information models based on oblique photogrammetry.

  3. Land Surface Data Assimilation

    NASA Astrophysics Data System (ADS)

    Houser, P. R.

    2012-12-01

    Information about land surface water, energy and carbon conditions is of critical importance to real-world applications such as agricultural production, water resource management, flood prediction, water supply, weather and climate forecasting, and environmental preservation. While ground-based observational networks are improving, the only practical way to observe these land surface states on continental to global scales is via satellites. Remote sensing can make spatially comprehensive measurements of various components of the terrestrial system, but it cannot provide information on the entire system (e.g. evaporation), and the observations represent only an instant in time. Land surface process models may be used to predict temporal and spatial terrestrial dynamics, but these predictions are often poor, due to errors in model initialization, parameters, forcing, and physics. Therefore, an attractive prospect is to combine the strengths of land surface models and observations (and minimize the weaknesses) to provide a superior terrestrial state estimate. This is the goal of land surface data assimilation. Data assimilation combines observations into a dynamical model, using the model's equations to provide time continuity and coupling between the estimated fields. Land surface data assimilation aims to utilize both our land surface process knowledge, as embodied in a land surface model, and information that can be gained from observations. Both model predictions and observations are imperfect, and we wish to use both synergistically to obtain a more accurate result. Moreover, both contain different kinds of information that, when used together, provide an accuracy level that cannot be obtained individually. Model biases can be mitigated using a complementary calibration and parameterization process. Limited point measurements are often used to calibrate the model(s) and validate the assimilation results.
This presentation will provide a brief background on land surface observation, modeling and data assimilation, followed by a discussion of various hydrologic data assimilation challenges, and finally conclude with several land surface data assimilation case studies.
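A minimal sketch of the model-observation merging idea described above is a scalar optimal-interpolation update, in which the weight given to the observation grows with the model's error variance; all numbers below are illustrative, not from any real assimilation system.

```python
# Scalar "optimal interpolation" update: blend a model forecast with an
# observation according to their (assumed known) error variances.
def assimilate(model_estimate, model_var, obs, obs_var):
    gain = model_var / (model_var + obs_var)   # trust the obs more when the model is uncertain
    analysis = model_estimate + gain * (obs - model_estimate)
    analysis_var = (1.0 - gain) * model_var    # analysis is more certain than either input
    return analysis, analysis_var

# Illustrative soil-moisture example (volumetric fraction):
analysis, var = assimilate(model_estimate=0.30, model_var=0.004,
                           obs=0.24, obs_var=0.001)
print(round(analysis, 3), round(var, 5))
```

Because the model variance dominates here, the analysis lands much closer to the observation, and its variance is smaller than both input variances, which is exactly the "synergy" argument made in the abstract.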

  4. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  5. A Community Data Model for Hydrologic Observations

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.

    2006-12-01

    The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used and provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from and ingestion of data into the database. These web services hosted by the San Diego Supercomputer Center make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Simple Object Access Protocol (SOAP) capability.
This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis across different observation types.
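A toy version of the single-observation-level relational design described above can be sketched with SQLite; the table and column names below are simplified stand-ins, not the actual CUAHSI ODM schema.

```python
import sqlite3

# One row per observation, with site and variable metadata joined in at
# query time -- the premise the abstract says enables flexible retrieval.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sites (site_id INTEGER PRIMARY KEY, name TEXT, lat REAL, lon REAL);
CREATE TABLE variables (var_id INTEGER PRIMARY KEY, name TEXT, units TEXT);
CREATE TABLE data_values (
    value_id INTEGER PRIMARY KEY,
    site_id INTEGER REFERENCES sites(site_id),
    var_id  INTEGER REFERENCES variables(var_id),
    ts TEXT, value REAL
);
""")
con.execute("INSERT INTO sites VALUES (1, 'Logan River', 41.74, -111.78)")
con.execute("INSERT INTO variables VALUES (1, 'discharge', 'm^3/s')")
con.executemany("INSERT INTO data_values VALUES (NULL, 1, 1, ?, ?)",
                [("2006-10-01T00:00", 1.2), ("2006-10-01T00:30", 1.4)])

# Retrieve observations with enough metadata to interpret them unambiguously.
rows = con.execute("""
    SELECT s.name, v.name, v.units, d.ts, d.value
    FROM data_values d
    JOIN sites s USING (site_id)
    JOIN variables v USING (var_id)
    ORDER BY d.ts
""").fetchall()
print(rows)
```

The hypothetical site and values exist only to exercise the schema; the point is that querying at the individual-observation level, with metadata carried alongside, is a plain relational join.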

  6. A technique for displaying flight information in the field of view of binoculars for use by the pilots of radio controlled models

    NASA Technical Reports Server (NTRS)

    Fuller, H. V.

    1974-01-01

    A display system was developed to provide flight information to the ground based pilots of radio controlled models used in flight research programs. The display system utilizes data received by telemetry from the model, and presents the information numerically in the field of view of the binoculars used by the pilots.

  7. Enhancing Access to Patient Education Information: A Pilot Usability Study

    PubMed Central

    Beaudoin, Denise E.; Rocha, Roberto A.; Tse, Tony

    2005-01-01

    Health care organizations are developing Web-based portals to provide patient access to personal health information and enhance patient-provider communication. This pilot study investigates two navigation models (“serial” and “menu-driven”) for improving access to education materials available through a portal. There was a trend toward greater user satisfaction with the menu-driven model. Model preference was influenced by frequency of Web use. Results should aid in the improvement of existing portals and in the development of new ones. PMID:16779179

  8. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    NASA Astrophysics Data System (ADS)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot mounted low cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low cost range camera, which was originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided from the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector-conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the used low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  9. A predictive model of avian natal dispersal distance provides prior information for investigating response to landscape change.

    PubMed

    Garrard, Georgia E; McCarthy, Michael A; Vesk, Peter A; Radford, James Q; Bennett, Andrew F

    2012-01-01

    1. Informative Bayesian priors can improve the precision of estimates in ecological studies or estimate parameters for which little or no information is available. While Bayesian analyses are becoming more popular in ecology, the use of strongly informative priors remains rare, perhaps because examples of informative priors are not readily available in the published literature. 2. Dispersal distance is an important ecological parameter, but is difficult to measure and estimates are scarce. General models that provide informative prior estimates of dispersal distances will therefore be valuable. 3. Using a world-wide data set on birds, we develop a predictive model of median natal dispersal distance that includes body mass, wingspan, sex and feeding guild. This model predicts median dispersal distance well when using the fitted data and an independent test data set, explaining up to 53% of the variation. 4. Using this model, we predict a priori estimates of median dispersal distance for 57 woodland-dependent bird species in northern Victoria, Australia. These estimates are then used to investigate the relationship between dispersal ability and vulnerability to landscape-scale changes in habitat cover and fragmentation. 5. We find evidence that woodland bird species with poor predicted dispersal ability are more vulnerable to habitat fragmentation than those species with longer predicted dispersal distances, thus improving the understanding of this important phenomenon. 6. The value of constructing informative priors from existing information is also demonstrated. When used as informative priors for four example species, predicted dispersal distances reduced the 95% credible intervals of posterior estimates of dispersal distance by 8-19%. 
Further, should we have wished to collect information on avian dispersal distances and relate it to species' responses to habitat loss and fragmentation, data from 221 individuals across 57 species would have been required to obtain estimates with the same precision as those provided by the general model. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.
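The way an informative prior tightens a posterior, as in point 6 above, can be sketched with a conjugate normal-normal update; the means and standard deviations below are illustrative, not the paper's estimates.

```python
import math

# Conjugate normal-normal update: posterior precision is the sum of prior
# and data precisions, so a tighter prior yields a narrower posterior.
def normal_posterior(prior_mean, prior_sd, data_mean, data_sd):
    prior_prec, data_prec = 1 / prior_sd**2, 1 / data_sd**2
    w = prior_prec / (prior_prec + data_prec)
    post_mean = w * prior_mean + (1 - w) * data_mean
    post_sd = math.sqrt(1 / (prior_prec + data_prec))
    return post_mean, post_sd

# Same (hypothetical) field data, vague vs informative prior on
# log median dispersal distance:
vague = normal_posterior(prior_mean=0.0, prior_sd=10.0, data_mean=1.1, data_sd=0.6)
informative = normal_posterior(prior_mean=0.9, prior_sd=0.5, data_mean=1.1, data_sd=0.6)

# Width of an approximate 95% credible interval is ~3.92 posterior sd.
print(round(3.92 * vague[1], 3), round(3.92 * informative[1], 3))
```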

  10. Implementation of a combined algorithm designed to increase the reliability of information systems: simulation modeling

    NASA Astrophysics Data System (ADS)

    Popov, A.; Zolotarev, V.; Bychkov, S.

    2016-11-01

    This paper examines the results of experimental studies of a previously presented combined algorithm designed to increase the reliability of information systems. Data illustrating the organization and conduct of the studies are provided. As part of the study, the experimental data from simulation modeling were compared with data from the operation of a real information system. A hypothesis of homogeneity of the logical structure of information systems was formulated, making it possible to reconfigure the presented algorithm, more specifically, to transform it into a model for the analysis and prediction of arbitrary information systems. The results presented can be used for further research in this direction. The ability to predict the functioning of information systems can be used for strategic and economic planning. The algorithm can also serve as a means of providing information security.

  11. COP21 climate negotiators' responses to climate model forecasts

    NASA Astrophysics Data System (ADS)

    Bosetti, Valentina; Weber, Elke; Berger, Loïc; Budescu, David V.; Liu, Ning; Tavoni, Massimo

    2017-02-01

    Policymakers involved in climate change negotiations are key users of climate science. It is therefore vital to understand how to communicate scientific information most effectively to this group. We tested how a unique sample of policymakers and negotiators at the Paris COP21 conference update their beliefs on year 2100 global mean temperature increases in response to a statistical summary of climate models' forecasts. We randomized the way information was provided across participants using three different formats similar to those used in Intergovernmental Panel on Climate Change reports. In spite of having received all available relevant scientific information, policymakers adopted such information very conservatively, assigning it less weight than their own prior beliefs. However, providing individual model estimates in addition to the statistical range was more effective in mitigating such inertia. The experiment was repeated with a population of European MBA students who, despite starting from similar priors, reported conditional probabilities closer to the provided models' forecasts than policymakers. There was also no effect of presentation format in the MBA sample. These results highlight the importance of testing visualization tools directly on the population of interest.
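The conservative updating the study reports can be caricatured as a linear pool of prior belief and communicated forecast, with policymakers putting more weight on their prior than the MBA students did; the weights and temperatures below are made up for illustration, not estimates from the experiment.

```python
# Linear opinion pool: posterior belief as a weighted mix of prior and forecast.
def updated_belief(prior, forecast, weight_on_prior):
    return weight_on_prior * prior + (1 - weight_on_prior) * forecast

prior_temp, model_forecast = 2.0, 3.5   # deg C warming by 2100 (illustrative)

# Hypothetical weights: policymakers under-weight the new information.
policymaker = updated_belief(prior_temp, model_forecast, weight_on_prior=0.7)
mba_student = updated_belief(prior_temp, model_forecast, weight_on_prior=0.4)
print(policymaker, mba_student)
```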

  12. Z39.50 and GILS model. [Government Information Locator Service

    NASA Technical Reports Server (NTRS)

    Christian, Eliot

    1994-01-01

    The Government Information Locator System (GILS) is a component of the National Information Infrastructure (NII) which provides electronic access to sources of publicly accessible information maintained throughout the Federal Government. GILS is an internetworking information resource that identifies other information resources, describes the information available in the referenced resources, and provides assistance in how to obtain the information either directly or through intermediaries. The GILS core content which references each Federal information system holding publicly accessible data or information is described in terms of mandatory and optional core elements.

  13. A Multi-Level Model of Information Seeking in the Clinical Domain

    PubMed Central

    Hung, Peter W.; Johnson, Stephen B.; Kaufman, David R.; Mendonça, Eneida A.

    2008-01-01

    Objective: Clinicians often have difficulty translating information needs into effective search strategies to find appropriate answers. Information retrieval systems employing an intelligent search agent that generates adaptive search strategies based on human search expertise could be helpful in meeting clinician information needs. A prerequisite for creating such systems is an information seeking model that facilitates the representation of human search expertise. The purpose of developing such a model is to provide guidance to information seeking system development and to shape an empirical research program. Design: The information seeking process was modeled as a complex problem-solving activity. After considering how similarly complex activities had been modeled in other domains, we determined that modeling context-initiated information seeking across multiple problem spaces allows the abstraction of search knowledge into functionally consistent layers. The knowledge layers were identified in the information science literature and validated through our observations of searches performed by health science librarians. Results: A hierarchical multi-level model of context-initiated information seeking is proposed. Each level represents (1) a problem space that is traversed during the online search process, and (2) a distinct layer of knowledge that is required to execute a successful search. Grand strategy determines what information resources will be searched, for what purpose, and in what order. The strategy level represents an overall approach for searching a single resource. Tactics are individual moves made to further a strategy. Operations are mappings of abstract intentions to information resource-specific concrete input. Assessment is the basis of interaction within the strategic hierarchy, influencing the direction of the search. 
Conclusion: The described multi-level model provides a framework for future research and the foundation for development of an automated information retrieval system that uses an intelligent search agent to bridge clinician information needs and human search expertise. PMID:18006383

  14. Moral judgment as information processing: an integrative review.

    PubMed

    Guglielmo, Steve

    2015-01-01

    How do humans make moral judgments about others' behavior? This article reviews dominant models of moral judgment, organizing them within an overarching framework of information processing. This framework poses two distinct questions: (1) What input information guides moral judgments? and (2) What psychological processes generate these judgments? Information Models address the first question, identifying critical information elements (including causality, intentionality, and mental states) that shape moral judgments. A subclass of Biased Information Models holds that perceptions of these information elements are themselves driven by prior moral judgments. Processing Models address the second question, and existing models have focused on the relative contribution of intuitive versus deliberative processes. This review organizes existing moral judgment models within this framework and critically evaluates them on empirical and theoretical grounds; it then outlines a general integrative model grounded in information processing, and concludes with conceptual and methodological suggestions for future research. The information-processing framework provides a useful theoretical lens through which to organize extant and future work in the rapidly growing field of moral judgment.

  15. Moral judgment as information processing: an integrative review

    PubMed Central

    Guglielmo, Steve

    2015-01-01

    How do humans make moral judgments about others’ behavior? This article reviews dominant models of moral judgment, organizing them within an overarching framework of information processing. This framework poses two distinct questions: (1) What input information guides moral judgments? and (2) What psychological processes generate these judgments? Information Models address the first question, identifying critical information elements (including causality, intentionality, and mental states) that shape moral judgments. A subclass of Biased Information Models holds that perceptions of these information elements are themselves driven by prior moral judgments. Processing Models address the second question, and existing models have focused on the relative contribution of intuitive versus deliberative processes. This review organizes existing moral judgment models within this framework and critically evaluates them on empirical and theoretical grounds; it then outlines a general integrative model grounded in information processing, and concludes with conceptual and methodological suggestions for future research. The information-processing framework provides a useful theoretical lens through which to organize extant and future work in the rapidly growing field of moral judgment. PMID:26579022

  16. Sharing Service Resource Information for Application Integration in a Virtual Enterprise - Modeling the Communication Protocol for Exchanging Service Resource Information

    NASA Astrophysics Data System (ADS)

    Yamada, Hiroshi; Kawaguchi, Akira

    Grid computing and web service technologies enable us to use networked resources in a coordinated manner. An integrated service is made of individual services running on coordinated resources. In order to achieve such coordinated services autonomously, the initiator of a coordinated service needs to know detailed service resource information. This information ranges from static attributes like the IP address of the application server to highly dynamic ones like the CPU load. The most famous wide-area service discovery mechanism based on names is DNS. Its hierarchical tree organization and caching methods take advantage of the static information managed. However, in order to integrate business applications in a virtual enterprise, we need a discovery mechanism to search for the optimal resources based on a given set of criteria (search keys). In this paper, we propose a communication protocol for exchanging service resource information among wide-area systems. We introduce the concept of the service domain that consists of service providers managed under the same management policy. This concept of the service domain is similar to that for autonomous systems (ASs). In each service domain, the service resource information provider manages the service resource information of service providers that exist in this service domain. The service resource information provider exchanges this information with other service resource information providers that belong to different service domains. We also verified the protocol's behavior and effectiveness using a simulation model developed for the proposed protocol.

  17. A Model for a Health Career Information Center.

    ERIC Educational Resources Information Center

    Bruhn, John G.; And Others

    1980-01-01

    One part of a model health career information center was a toll-free health careers hotline which provided information to high school and college students, parents, counselors, and teachers. Evaluation of the hotline indicates that it fills a need, is considered useful by callers, and is of relatively small cost. (Author/CT)

  18. Telecommunications Information Network: A Model for On-Demand Transfer of Medical Information. Final Report.

    ERIC Educational Resources Information Center

    Lorenzi, Nancy M.; And Others

    This report summarizes the third phase of the Telecommunications Information Network (TIN), which provides a telecommunications link between four remote southwest Ohio hospitals and the University of Cincinnati Medical Center, thereby reducing the isolation of healthcare providers at the remote hospitals. A description of the system explains the…

  19. Information Management for Unmanned Systems: Combining DL-Reasoning with Publish/Subscribe

    NASA Astrophysics Data System (ADS)

    Moser, Herwig; Reichelt, Toni; Oswald, Norbert; Förster, Stefan

    Sharing capabilities and information between collaborating entities by using modern information and communication technology is a core principle in complex distributed civil or military mission scenarios. Previous work proved the suitability of Service-oriented Architectures for modelling and sharing the participating entities' capabilities. Albeit providing a satisfactory model for capability sharing, pure service-orientation curtails expressiveness for information exchange as opposed to dedicated data-centric communication principles. In this paper we introduce an Information Management System which combines OWL-Ontologies and automated reasoning with Publish/Subscribe-Systems, providing for a shared but decoupled data model. While confirming existing related research results, we emphasise the novel application and lack of practical experience of using Semantic Web technologies in areas other than originally intended. That is, aiding decision support and software design in the context of a mission scenario for an unmanned system. Experiments within a complex simulation environment show the immediate benefits of a semantic information-management and -dissemination platform: clear separation of concerns in code and data model, increased service re-usability and extensibility, as well as regulation of data flow and respective system behaviour through declarative rules.

  20. Modeling dispersion of traffic-related pollutants in the NEXUS health study

    EPA Science Inventory

    Dispersion modeling tools have traditionally provided critical information for air quality management decisions, but have been used recently to provide exposure estimates to support health studies. However, these models can be challenging to implement, particularly in near-road s...

  1. Privacy-Preserving Accountable Accuracy Management Systems (PAAMS)

    NASA Astrophysics Data System (ADS)

    Thomas, Roshan K.; Sandhu, Ravi; Bertino, Elisa; Arpinar, Budak; Xu, Shouhuai

    We argue for the design of “Privacy-preserving Accountable Accuracy Management Systems (PAAMS)”. The designs of such systems recognize from the onset that accuracy, accountability, and privacy management are intertwined. As such, these systems have to dynamically manage the tradeoffs between these (often conflicting) objectives. For example, accuracy in such systems can be improved by providing better accountability links between structured and unstructured information. Further, accuracy may be enhanced if access to private information is allowed in controllable and accountable ways. Our proposed approach involves three key elements. First, a model to link unstructured information such as that found in email, image and document repositories with structured information such as that in traditional databases. Second, a model for accuracy management and entity disambiguation by proactively preventing, detecting and tracing errors in information bases. Third, a model to provide privacy-governed operation as accountability and accuracy are managed.

  2. Use of MERRA-2 in the National Solar Radiation Database and Beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Lopez, Anthony; Habte, Aron

    The National Solar Radiation Database (NSRDB) is a flagship product of NREL that provides solar radiation and ancillary meteorological information through a GIS-based portal. The data are provided at a 4 km x 4 km spatial and 30-minute temporal resolution, covering the period 1998-2015. The gridded data distributed by the NSRDB are derived from satellite measurements using the Physical Solar Model (PSM), a two-stage approach: cloud properties are first retrieved from measurements by the GOES series of satellites, and that information is then used in a radiative transfer model to estimate solar radiation at the surface. In addition to the satellite data, the model requires ancillary meteorological information that is provided mainly by output from NASA's Modern Era Retrospective Analysis for Research and Applications (MERRA-2) model. This presentation provides insight into how the NSRDB is developed using the PSM and how the various sources of data, including the MERRA-2 data, are used during the process.

  3. Informing Mechanistic Toxicology with Computational Molecular Models

    EPA Science Inventory

    Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular-level information that augments, enriches, and complements in vitro and in vivo effo...

  4. The Use of a Context-Based Information Retrieval Technique

    DTIC Science & Technology

    2009-07-01

    provided in context. Latent Semantic Analysis (LSA) is a statistical technique for inferring contextual and structural information, and previous studies...WAIS). LSA, which is also known as latent semantic indexing (LSI), uses a statistical and...In contrast, natural language models apply algorithms that combine statistical information with semantic information. Semantic

  5. Training Community Modeling and Simulation Business Plan: 2009 Edition

    DTIC Science & Technology

    2010-04-01

    strategic information assurance...Provide crisis action procedures training...Provide the IC SOF-specific training at the operational level...information and products • Collaborative analysis processes • Dissemination of information throughout a command and to subordinates by redundant means...-centric M&S capabilities will improve training for information warfare, assist with training for homeland defense operations, and crisis-management planning

  6. Information Interaction: Providing a Framework for Information Architecture.

    ERIC Educational Resources Information Center

    Toms, Elaine G.

    2002-01-01

    Discussion of information architecture focuses on a model of information interaction that bridges the gap between human and computer and between information behavior and information retrieval. Illustrates how the process of information interaction is affected by the user, the system, and the content. (Contains 93 references.) (LRW)

  7. Evaluating models of remember-know judgments: complexity, mimicry, and discriminability.

    PubMed

    Cohen, Andrew L; Rotello, Caren M; Macmillan, Neil A

    2008-10-01

    Remember-know judgments provide additional information in recognition memory tests, but the nature of this information and the attendant decision process are in dispute. Competing models have proposed that remember judgments reflect a sum of familiarity and recollective information (the one-dimensional model), are based on a difference between these strengths (STREAK), or are purely recollective (the dual-process model). A choice among these accounts is sometimes made by comparing the precision of their fits to data, but this strategy may be muddied by differences in model complexity: Some models that appear to provide good fits may simply be better able to mimic the data produced by other models. To evaluate this possibility, we simulated data with each of the models in each of three popular remember-know paradigms, then fit those data to each of the models. We found that the one-dimensional model is generally less complex than the others, but despite this handicap, it dominates the others as the best-fitting model. For both reasons, the one-dimensional model should be preferred. In addition, we found that some empirical paradigms are ill-suited for distinguishing among models. For example, data collected by soliciting remember/know/new judgments--that is, the trinary task--provide a particularly weak ground for distinguishing models. Additional tables and figures may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, at www.psychonomic.org/archive.
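The model-recovery logic described above (simulate data from one model, fit all candidates, see which wins) can be sketched with simple stand-in distributions. The normal and Laplace models below are illustrative substitutes, not the recognition-memory models themselves.

```python
import math, random, statistics

# Minimal model-recovery sketch: generate data from one model, fit two
# candidate models by maximum likelihood, and ask which fits better.

random.seed(1)

def normal_loglik(data):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data) or 1e-9
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu) ** 2 / (2 * sigma**2) for x in data)

def laplace_loglik(data):
    mu = statistics.median(data)
    b = statistics.fmean(abs(x - mu) for x in data) or 1e-9
    return sum(-math.log(2 * b) - abs(x - mu) / b for x in data)

# Generate from the normal model, then see which model "wins" the fit.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
winner = "normal" if normal_loglik(data) > laplace_loglik(data) else "laplace"
print(winner)
```

Repeating this in both directions (and with equal parameter counts, as here) separates genuine fit from mimicry, which is the comparison the abstract describes.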

  8. In-Service Workshop Model. Development Work, Volunteer Service and Project Review. Core Curriculum Resource Materials.

    ERIC Educational Resources Information Center

    Edwards, Dan

    A model is provided for an inservice workshop to provide systematic project review, conduct individual volunteer support and problem solving, and conduct future work planning. Information on model use and general instructions are presented. Materials are provided for 12 sessions covering a 5-day period. The first session on climate setting and…

  9. A Product Development Decision Model for Cockpit Weather Information System

    NASA Technical Reports Server (NTRS)

    Sireli, Yesim; Kauffmann, Paul; Gupta, Surabhi; Kachroo, Pushkin; Johnson, Edward J., Jr. (Technical Monitor)

    2003-01-01

    There is a significant market demand for advanced cockpit weather information products. However, it is unclear how to identify the most promising technological options that provide the desired mix of consumer requirements by employing feasible technical systems at a price that achieves market success. This study develops a unique product development decision model that employs Quality Function Deployment (QFD) and Kano's model of consumer choice. This model is specifically designed for exploration and resolution of this and similar information technology related product development problems.

  10. A Product Development Decision Model for Cockpit Weather Information Systems

    NASA Technical Reports Server (NTRS)

    Sireli, Yesim; Kauffmann, Paul; Gupta, Surabhi; Kachroo, Pushkin

    2003-01-01

    There is a significant market demand for advanced cockpit weather information products. However, it is unclear how to identify the most promising technological options that provide the desired mix of consumer requirements by employing feasible technical systems at a price that achieves market success. This study develops a unique product development decision model that employs Quality Function Deployment (QFD) and Kano's model of consumer choice. This model is specifically designed for exploration and resolution of this and similar information technology related product development problems.

  11. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    PubMed Central

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
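A minimal sketch of such a document model follows, assuming hypothetical element names and offsets rather than the authors' actual DTD: the original report text is retained verbatim, and a structured component points back into it by character offsets.

```python
import xml.etree.ElementTree as ET

# Toy XML document model: keep the original narrative intact and add a
# structured component whose elements are linked to spans of the text.

report_text = "Chest x-ray shows mild cardiomegaly. No pleural effusion."

root = ET.Element("report")
ET.SubElement(root, "text").text = report_text

structured = ET.SubElement(root, "structured")
finding = ET.SubElement(structured, "finding",
                        code="cardiomegaly",       # illustrative attributes
                        start="23", end="35")      # offsets into the text
finding.text = report_text[23:35]

print(ET.tostring(root, encoding="unicode"))
```

Because the structured elements carry offsets, a query can retrieve documents by coded content and still highlight the salient span in the original report, as the conclusions describe.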

  12. A New Method for Conceptual Modelling of Information Systems

    NASA Astrophysics Data System (ADS)

    Gustas, Remigijus; Gustiene, Prima

    Service architecture is not necessarily bound to the technical aspects of information system development. It can be defined by using conceptual models that are independent of any implementation technology. Unfortunately, conventional information system analysis and design methods cover just part of the modelling notations required for engineering service architectures. They do not provide effective support for maintaining semantic integrity between business processes and data. Service orientation is a paradigm that can be applied for conceptual modelling of information systems. The concept of service is rather well understood in different domains, and it can be applied equally well for the conceptualization of organizational and technical information system components. This chapter concentrates on an analysis of the differences between service-oriented modelling and object-oriented modelling. The service-oriented method is used for semantic integration of the static and dynamic aspects of information systems.

  13. Estimates of Soil Moisture Using the Land Information System for Land Surface Water Storage: Case Study for the Western States Water Mission

    NASA Astrophysics Data System (ADS)

    Liu, P. W.; Famiglietti, J. S.; Levoe, S.; Reager, J. T., II; David, C. H.; Kumar, S.; Li, B.; Peters-Lidard, C. D.

    2017-12-01

    Soil moisture is one of the critical factors in terrestrial hydrology. Accurate soil moisture information improves estimation of terrestrial water storage and fluxes, which is essential for water resource management, including sustainable groundwater pumping and agricultural irrigation practices. It is particularly important during dry periods, when water stress is high. The Western States Water Mission (WSWM), a multiyear mission project of NASA's Jet Propulsion Laboratory, is operated to understand and estimate water availability in the western United States by integrating observations and measurements from in-situ and remote sensing sensors with hydrological models. WSWM data products have been used to assess and explore the adverse impacts of the California drought (2011-2016) and to provide decision-makers with information for water use planning. Although observations are often more accurate, simulations using land surface models can provide water availability estimates at desired spatio-temporal scales. The Land Information System (LIS), developed by NASA's Goddard Space Flight Center, integrates land surface models with data processing and management tools, enabling measurements and observations from various platforms to be used as forcings in a high-performance computing environment to forecast hydrologic conditions. The goal of this study is to implement the LIS in the western United States for estimates of soil moisture. We will implement the Noah-MP model on the 12-km North America Land Data Assimilation System grid and compare it to other land surface models included in the LIS. Findings will provide insight into the differences between model estimates and model physics. Outputs from a multi-model ensemble from LIS can also be used to enhance the reliability of the estimates and to quantify their uncertainty. We will compare the LIS-based soil moisture estimates to the SMAP enhanced 9-km soil moisture product to understand the mechanistic differences between the model and observation. These outcomes will contribute to the WSWM by providing robust products.
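The multi-model ensemble idea in the last sentences can be sketched as follows; the model names and soil moisture values are hypothetical.

```python
import statistics

# Combine soil moisture estimates (m^3/m^3) from several land surface
# models for one grid cell: the ensemble mean serves as the combined
# estimate and the spread as a simple uncertainty proxy.

estimates = {"Noah-MP": 0.24, "CLM": 0.27, "VIC": 0.22, "Catchment": 0.25}

values = list(estimates.values())
ensemble_mean = statistics.fmean(values)     # combined estimate
ensemble_spread = statistics.stdev(values)   # uncertainty proxy

print(f"mean={ensemble_mean:.3f}, spread={ensemble_spread:.3f}")
```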

  14. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867

  15. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.
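The precision benefit of informative priors can be illustrated with a conjugate normal-normal update; the numbers below are invented, not from the study.

```python
# Normal model with known observation variance: the posterior precision is
# the sum of prior precision and data precision, so an informative prior
# always yields a smaller posterior variance for the same data.

def posterior(prior_mean, prior_var, data_mean, data_var, n):
    """Normal-normal update: returns (posterior_mean, posterior_variance)."""
    precision = 1.0 / prior_var + n / data_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + n * data_mean / data_var)
    return post_mean, post_var

# Same data, two priors: vague vs. informative.
vague = posterior(prior_mean=0.0, prior_var=1e6, data_mean=0.12, data_var=1.0, n=25)
informative = posterior(prior_mean=0.10, prior_var=0.01, data_mean=0.12, data_var=1.0, n=25)

print(vague[1], informative[1])  # posterior variances
```

Note what the sketch does not show: accuracy depends on whether the informative prior mean is near the truth, which is exactly the trade-off the study evaluates empirically.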

  16. CISNET: Model Documentation

    Cancer.gov

    The Publications pages provide lists of all CISNET publications since the inception of CISNET. Publications are listed by Cancer Site or by Research Topic. The Publication Support and Modeling Resources pages provide access to technical modeling information, raw data, and publication extensions stemming from the work of the CISNET consortium.

  17. A New Model for the Organizational Structure of Medical Record Departments in Hospitals in Iran

    PubMed Central

    Moghaddasi, Hamid; Hosseini, Azamossadat; Sheikhtaheri, Abbas

    2006-01-01

    The organizational structure of medical record departments in Iran is not appropriate for the efficient management of healthcare information. In addition, there is no strong information management division to provide comprehensive information management services in hospitals in Iran. Therefore, a suggested model was designed based on four main axes: 1) specifications of a Health Information Management Division, 2) specifications of a Healthcare Information Management Department, 3) the functions of the Healthcare Information Management Department, and 4) the units of the Healthcare Information Management Department. The validity of the model was determined through use of the Delphi technique. The results of the validation process show that the majority of experts agree with the model and consider it to be appropriate and applicable for hospitals in Iran. The model is therefore recommended for hospitals in Iran. PMID:18066362

  18. Early experiences in evolving an enterprise-wide information model for laboratory and clinical observations.

    PubMed

    Chen, Elizabeth S; Zhou, Li; Kashyap, Vipul; Schaeffer, Molly; Dykes, Patricia C; Goldberg, Howard S

    2008-11-06

    As Electronic Healthcare Records become more prevalent, there is an increasing need to ensure unambiguous data capture, interpretation, and exchange within and across heterogeneous applications. To address this need, a common, uniform, and comprehensive approach for representing clinical information is essential. At Partners HealthCare System, we are investigating the development and implementation of enterprise-wide information models to specify the representation of clinical information to support semantic interoperability. This paper summarizes our early experiences in: (1) defining a process for information model development, (2) reviewing and comparing existing healthcare information models, (3) identifying requirements for representation of laboratory and clinical observations, and (4) exploring linkages to existing terminology and data standards. These initial findings provide insight to the various challenges ahead and guidance on next steps for adoption of information models at our organization.

  19. Electronic health record use among cancer patients: Insights from the Health Information National Trends Survey.

    PubMed

    Strekalova, Yulia A

    2017-04-01

    Over 90% of US hospitals provide patients with access to an electronic copy of their health records, but utilization of electronic health records by US consumers remains low. Guided by the comprehensive information-seeking model, this study used data from the National Cancer Institute's Health Information National Trends Survey 4 (Cycle 4) and examined the factors that explain the level of electronic health record use by cancer patients. Consistent with the model, individual information-seeking factors and perceptions of security and utility were associated with the frequency of electronic health record access. Specifically, higher income, prior online information seeking, interest in accessing health information online, and normative beliefs were predictive of electronic health record access. Conversely, poorer general health status and lack of health care provider encouragement to use electronic health records were associated with lower utilization rates. The current findings provide theory-based evidence that contributes to the understanding of the explanatory factors of electronic health record use and suggest future directions for research and practice.

  20. [DESCRIPTION AND PRESENTATION OF THE RESULTS OF ELECTROENCEPHALOGRAM PROCESSING USING AN INFORMATION MODEL].

    PubMed

    Myznikov, I L; Nabokov, N L; Rogovanov, D Yu; Khankevich, Yu R

    2016-01-01

    The paper proposes applying the informational modeling of correlation matrices, developed by I.L. Myznikov in the early 1990s, to neurophysiological investigations such as electroencephalogram (EEG) recording and analysis and the coherence description of signals from electrodes on the head surface. The authors demonstrate information models built using data from studies of inert gas inhalation by healthy human subjects. In the opinion of the authors, information models provide an opportunity to describe physiological processes with a high level of generalization. The procedure of presenting EEG results holds great promise for broad application.
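A rough sketch of the correlation-matrix step underlying such models: compute pairwise correlations between channel signals and keep only the strong links. The channel names, signals, and threshold are invented for illustration, not Myznikov's actual procedure.

```python
import math

# Pairwise Pearson correlations between toy channel signals, thresholded
# into a set of strong links (the skeleton of an information model).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

channels = {
    "Fp1": [1.0, 2.0, 3.0, 4.0, 5.0],
    "Fp2": [1.1, 2.1, 2.9, 4.2, 5.0],   # tracks Fp1 closely
    "O1":  [5.0, 1.0, 4.0, 2.0, 3.0],   # unrelated
}

names = list(channels)
links = [(a, b, pearson(channels[a], channels[b]))
         for i, a in enumerate(names) for b in names[i + 1:]]
strong = [(a, b) for a, b, r in links if abs(r) > 0.8]
print(strong)
```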

  1. A Modeling Approach to the Development of Students' Informal Inferential Reasoning

    ERIC Educational Resources Information Center

    Doerr, Helen M.; Delmas, Robert; Makar, Katie

    2017-01-01

    Teaching from an informal statistical inference perspective can address the challenge of teaching statistics in a coherent way. We argue that activities that promote model-based reasoning address two additional challenges: providing a coherent sequence of topics and promoting the application of knowledge to novel situations. We take a models and…

  2. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter.

    PubMed

    Cheng, Jianhua; Wang, Tongda; Wang, Lu; Wang, Zhenmin

    2017-10-23

    Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms, which use the measurement information provided by the master SINS, lose their effectiveness. In this paper, a new polar TA algorithm with the aid of a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to address these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal system model information and is then used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than that of state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper effectively ensures and improves the accuracy of TA in the harsh polar environment.
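The adaptation idea, estimating measurement noise from the innovation sequence so that abnormal measurements are down-weighted, can be illustrated with a drastically simplified scalar filter. This is not the paper's unscented algorithm; every parameter below is an invented example.

```python
import random

# Scalar adaptive Kalman filter sketch: the measurement-noise estimate R
# is adapted from the squared innovations, so abnormal measurement
# information inflates R and reduces the Kalman gain automatically.

random.seed(0)

def adaptive_kf(measurements, q=0.01, r0=1.0, alpha=0.05):
    x, p, r = 0.0, 1.0, r0
    estimates = []
    for z in measurements:
        p += q                                   # predict (random-walk model)
        innovation = z - x
        # E[innovation^2] = p + r, so innovation^2 - p estimates r:
        r = (1 - alpha) * r + alpha * max(innovation**2 - p, 1e-6)
        k = p / (p + r)                          # Kalman gain
        x += k * innovation                      # update state
        p *= (1 - k)
        estimates.append(x)
    return estimates

truth = 5.0
zs = [truth + random.gauss(0, 0.5) for _ in range(200)]
est = adaptive_kf(zs)
print(round(est[-1], 2))
```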

  3. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter

    PubMed Central

    Cheng, Jianhua; Wang, Tongda; Wang, Lu; Wang, Zhenmin

    2017-01-01

    Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms, which use the measurement information provided by the master SINS, lose their effectiveness. In this paper, a new polar TA algorithm with the aid of a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to address these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal system model information and is then used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than that of state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper effectively ensures and improves the accuracy of TA in the harsh polar environment. PMID:29065521

  4. Use of Time Information in Models behind Adaptive System for Building Fluency in Mathematics

    ERIC Educational Resources Information Center

    Rihák, Jirí

    2015-01-01

    In this work we introduce the system for adaptive practice of foundations of mathematics. Adaptivity of the system is primarily provided by selection of suitable tasks, which uses information from a domain model and a student model. The domain model does not use prerequisites but works with splitting skills to more concrete sub-skills. The student…

  5. Building team adaptive capacity: the roles of sensegiving and team composition.

    PubMed

    Randall, Kenneth R; Resick, Christian J; DeChurch, Leslie A

    2011-05-01

    The current study draws on motivated information processing in groups theory to propose that leadership functions and composition characteristics provide teams with the epistemic and social motivation needed for collective information processing and strategy adaptation. Three-person teams performed a city management decision-making simulation (N=74 teams; 222 individuals). Teams first managed a simulated city that was newly formed and required growth strategies and were then abruptly switched to a second simulated city that was established and required revitalization strategies. Consistent with hypotheses, external sensegiving and team composition enabled distinct aspects of collective information processing. Sensegiving prompted the emergence of team strategy mental models (i.e., cognitive information processing); psychological collectivism facilitated information sharing (i.e., behavioral information processing); and cognitive ability provided the capacity for both the cognitive and behavioral aspects of collective information processing. In turn, team mental models and information sharing enabled reactive strategy adaptation.

  6. An Information Theoretic Analysis of Classification Sorting and Cognition by Ninth Grade Children within a Piagetian Setting.

    ERIC Educational Resources Information Center

    Dunlop, David Livingston

    The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…

  7. Providing Information to Parents of Children with Mental Health Problems: A Discrete Choice Conjoint Analysis of Professional Preferences

    ERIC Educational Resources Information Center

    Cunningham, Charles E.; Deal, Ken; Rimas, Heather; Chen, Yvonne; Buchanan, Don H.; Sdao-Jarvie, Kathie

    2009-01-01

    We used discrete choice conjoint analysis to model the ways 645 children's mental health (CMH) professionals preferred to provide information to parents seeking CMH services. Participants completed 20 choice tasks presenting experimentally varied combinations of the study's 14 4-level CMH information transfer attributes. Latent class analysis…

  8. Data Requirements and the Basis for Designing Health Information Kiosks.

    PubMed

    Afzali, Mina; Ahmadi, Maryam; Mahmoudvand, Zahra

    2017-09-01

    Health kiosks are an innovative and cost-effective solution that organizations can easily implement to help educate people. The aim was to determine the data requirements and basis for designing health information kiosks as a new technology for maintaining the health of society. By reviewing the literature, a list of information requirements was compiled in 4 sections (demographic information, general information, diagnostic information, and medical history), and questions related to the objectives, data elements, stakeholders, requirements, infrastructure, and applications of health information kiosks were prepared. To determine the content validity of the designed set, the opinions of 2 physicians and 2 specialists in medical informatics were obtained, and the test-retest method was used to measure its reliability. Data were analyzed using SPSS software. In the proposed model for Iran, 170 data elements in 6 sections were presented for expert opinion; collective agreement was ultimately reached on 106 elements. Creating a standard data set is critical to providing a model of a health information kiosk. According to the literature review, the most important components of a health information kiosk fall into six categories: information needs, data elements, applications, stakeholders, requirements, and infrastructure, all of which need to be considered when designing a health information kiosk.
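The standard-data-set idea can be sketched as a required-elements check over the four sections named above. The element names are invented examples, not the 106 elements the experts agreed on.

```python
# Toy standard data set for a health information kiosk, grouped into the
# four sections from the abstract, plus a completeness check for records.

REQUIRED_SECTIONS = {
    "demographic": ["name", "birth_date", "sex"],
    "general": ["height", "weight"],
    "diagnostic": ["blood_pressure"],
    "medical_history": ["allergies"],
}

def missing_elements(record):
    """Return {section: [missing element names]} for an incoming record."""
    missing = {}
    for section, elements in REQUIRED_SECTIONS.items():
        absent = [e for e in elements if e not in record.get(section, {})]
        if absent:
            missing[section] = absent
    return missing

record = {"demographic": {"name": "A. Tester", "birth_date": "1980-01-01"},
          "general": {"height": 170, "weight": 68}}
print(missing_elements(record))
```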

  9. Assessing the effects of pharmacists' perceived organizational support, organizational commitment and turnover intention on provision of medication information at community pharmacies in Lithuania: a structural equation modeling approach.

    PubMed

    Urbonas, Gvidas; Kubilienė, Loreta; Kubilius, Raimondas; Urbonienė, Aušra

    2015-03-01

    As a member of a pharmacy organization, a pharmacist is not only bound to fulfill his/her professional obligations but is also affected by different personal and organizational factors that may influence his/her behavior and, consequently, the quality of the services he/she provides to patients. The main purpose of the research was to test a hypothesized model of the relationships among several organizational variables, and to investigate whether any of these variables affects the service of provision of medication information at community pharmacies. During the survey, pharmacists working at community pharmacies in Lithuania were asked to express their opinions on the community pharmacies at which they worked and to reflect on their actions when providing information on medicines to their patients. The statistical data were analyzed by applying a structural equation modeling technique to test the hypothesized model of the relationships among the variables of Perceived Organizational Support, Organizational Commitment, Turnover Intention, and Provision of Medication Information. The final model revealed that Organizational Commitment had a positive direct effect on Provision of Medication Information (standardized estimate = 0.27) and a negative direct effect (standardized estimate = -0.66) on Turnover Intention. Organizational Commitment mediated the indirect effects of Perceived Organizational Support on Turnover Intention (standardized estimate = -0.48) and on Provision of Medication Information (standardized estimate = 0.20). Pharmacists' Turnover Intention had no significant effect on Provision of Medication Information. Community pharmacies may be viewed as encouraging, to some extent, the service of provision of medication information. Pharmacists who felt higher levels of support from their organizations also expressed, to a certain extent, higher commitment to their organizations by providing more consistent medication information to patients. 
However, the effect of organizational variables on the variable of Provision of Medication Information appeared to be limited.
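The reported mediated effects follow the usual product-of-paths rule for indirect effects in structural equation models. The sketch below reproduces them, backing out the unreported POS -> Commitment path from the reported indirect effects, so that value is inferred, not reported.

```python
# Indirect effects as products of standardized path coefficients, using the
# estimates reported in the abstract above.

oc_to_provision = 0.27       # Commitment -> Provision (reported)
oc_to_turnover = -0.66       # Commitment -> Turnover (reported)
pos_to_turnover_ind = -0.48  # POS -> Turnover via Commitment (reported)

pos_to_oc = pos_to_turnover_ind / oc_to_turnover     # inferred direct path
pos_to_provision_ind = pos_to_oc * oc_to_provision   # indirect effect

print(round(pos_to_oc, 2), round(pos_to_provision_ind, 2))
```

The recovered indirect effect of Perceived Organizational Support on Provision of Medication Information matches the reported 0.20, which is a quick internal-consistency check on the published path estimates.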

  10. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are applied, and the rationale for the use of classifiers.
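The move from a conceptual information model to a relational schema, the step the methodology centers on, can be illustrated with an invented two-entity domain; none of the names below come from the paper.

```python
import sqlite3

# Conceptual model: Department 1--* Employee. The relational schema encodes
# the entities as tables and the relationship as a foreign key, and a user
# information need becomes a query over that schema.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    );
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        dept_id INTEGER NOT NULL REFERENCES department(dept_id)
    );
""")
conn.execute("INSERT INTO department VALUES (1, 'Engineering')")
conn.execute("INSERT INTO employee VALUES (10, 'A. Designer', 1)")

# Information need: "who works in Engineering?"
rows = conn.execute("""
    SELECT e.name FROM employee e
    JOIN department d ON d.dept_id = e.dept_id
    WHERE d.name = 'Engineering'
""").fetchall()
print(rows)
```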

  11. Operator interface design considerations for a PACS information management system

    NASA Astrophysics Data System (ADS)

    Steinke, James E.; Nabijee, Kamal H.; Freeman, Rick H.; Prior, Fred W.

    1990-08-01

    As prototype PACS grow into fully digital departmental and hospital-wide systems, effective information storage and retrieval mechanisms become increasingly important. Thus far, designers of PACS workstations have concentrated on image communication and display functionality. The new challenge is to provide appropriate operator interface environments to facilitate information retrieval. The "Marburg Model" [1] provides a detailed analysis of the functions, control flows and data structures used in Radiology. It identifies a set of "actors" who perform information manipulation functions. Drawing on this model and its associated methodology it is possible to identify four modes of use of information systems in Radiology: Clinical Routine, Research, Consultation, and Administration. Each mode has its own specific access requirements and views of information. An operator interface strategy appropriate for each mode will be proposed. Clinical Routine mode is the principal concern of PACS primary diagnosis workstations. In a full PACS implementation, such workstations must provide a simple and consistent navigational aid for the on-line image database, a local work list of cases to be reviewed, and easy access to information from other hospital information systems. A hierarchical method of information access is preferred because it provides the ability to start at high-level entities and iteratively narrow the scope of information from which to select subsequent operations. An implementation using hierarchical, nested software windows that fulfills such requirements will be examined.

  12. Application of information technology to the National Launch System

    NASA Technical Reports Server (NTRS)

    Mauldin, W. T.; Smith, Carolyn L.; Monk, Jan C.; Davis, Steve; Smith, Marty E.

    1992-01-01

    The approach to the development of the Unified Information System (UNIS), which is to provide in a timely manner all the information required to manage, design, manufacture, integrate, test, launch, operate, and support the National Launch System (NLS), is described along with current and planned capabilities. STESYM, a simulation of the Space Transportation Main Engine (STME) development program, comprises a collection of data models grouped into two primary models: the Engine Infrastructure Model (ENGIM) and the Engine Integrated Cost Model (ENGICOM). ENGIM is an end-to-end model of the infrastructure needed to perform the fabrication, assembly, and testing of the STME and its components. Together, UNIS and STESYM are to provide NLS managers and engineers with the ability to access various types and files of data quickly and to use that data to assess the capabilities of the STME program.

  13. The DoD Enterprise Model. Volume 1. Strategic Activity and Data Models

    DTIC Science & Technology

    1994-01-01

    Provide Administrative Services: Inform & Advise provides explanations and expert opinions to people on such matters as health benefits, legal rights... level functional template for all DoD Corporate Information Management (CIM) initiatives. Major Defense activities have already benefitted from using... evaluating plan performance... of action (e.g., occupational safety and health, environmental protection, technology transfer)

  14. An Ontology of Quality Initiatives and a Model for Decentralized, Collaborative Quality Management on the (Semantic) World Wide Web

    PubMed Central

    2001-01-01

    This editorial provides a model of how quality initiatives concerned with health information on the World Wide Web may in the future interact with each other. This vision fits into the evolving "Semantic Web" architecture - i.e., the prospect that the World Wide Web may evolve from a mess of unstructured, human-readable information sources into a global knowledge base with an additional layer providing richer and more meaningful relationships between resources. One first prerequisite for forming such a "Semantic Web" or "web of trust" among the players active in quality management of health information is that these initiatives make statements about themselves and about each other in a machine-processable language. I present a concrete model of how this collaboration could look and provide some recommendations on what the role of the World Health Organization (WHO) and other policy makers in this framework could be. PMID:11772549

  15. An ontology of quality initiatives and a model for decentralized, collaborative quality management on the (semantic) World-Wide-Web.

    PubMed

    Eysenbach, G

    2001-01-01

    This editorial provides a model of how quality initiatives concerned with health information on the World Wide Web may in the future interact with each other. This vision fits into the evolving "Semantic Web" architecture - i.e., the prospect that the World Wide Web may evolve from a mess of unstructured, human-readable information sources into a global knowledge base with an additional layer providing richer and more meaningful relationships between resources. One first prerequisite for forming such a "Semantic Web" or "web of trust" among the players active in quality management of health information is that these initiatives make statements about themselves and about each other in a machine-processable language. I present a concrete model of how this collaboration could look and provide some recommendations on what the role of the World Health Organization (WHO) and other policy makers in this framework could be.

  16. SMC: SCENIC Model Control

    NASA Technical Reports Server (NTRS)

    Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.

    2015-01-01

    NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions, with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities at different stages of the simulation process: MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components into an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
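
The hub-and-spoke integration described above, with a central control module routing report messages between client wrappers, can be sketched minimally as follows. The wrapper names and message fields are illustrative assumptions, not SCENIC's actual interfaces.

```python
# Minimal sketch of a central control module relaying report-based
# messages between client wrappers for the individual modeling tools.
class ControlHub:
    def __init__(self):
        self.wrappers = {}  # wrapper name -> message handler

    def register(self, name, handler):
        self.wrappers[name] = handler

    def send(self, source, target, report):
        """Route a report from one client wrapper to another via the hub."""
        return self.wrappers[target]({"from": source, "report": report})

hub = ControlHub()
# Hypothetical wrappers: a scheduler and a link-simulation tool.
hub.register("scheduler", lambda msg: "ack")
hub.register("link_sim", lambda msg: "simulating " + msg["report"]["mission"])

reply = hub.send("scheduler", "link_sim", {"mission": "demo-mission"})
ack = hub.send("link_sim", "scheduler", {"status": "done"})
```

Because every message passes through the hub, each tool only needs to know the hub's routing interface, not the other tools' APIs.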

  17. Data-assisted protein structure modeling by global optimization in CASP12.

    PubMed

    Joo, Keehyoung; Heo, Seungryong; Joung, InSuk; Hong, Seung Hwan; Lee, Sung Jong; Lee, Jooyoung

    2018-03-01

    In CASP12, 2 types of data-assisted protein structure modeling were tested. Either SAXS experimental data or cross-linking experimental data was provided for a selected number of CASP12 targets, which CASP12 predictors could utilize for better protein structure modeling. We devised 2 separate energy terms for SAXS data and cross-linking data to drive the model structures toward more native-like structures that satisfied the given experimental data as much as possible. In CASP11, we successfully performed protein structure modeling using simulated sparse and ambiguously assigned NOE data and/or correct residue-residue contact information, where the only energy term that folded the protein into its native structure was the one originating from the given experimental data. However, the 2 types of experimental data provided in CASP12 were far from sufficient to fold the target protein into its native structure, because SAXS data provides only the overall shape of the molecule and cross-linking contact information provides only very low-resolution distance information. For this reason, we combined the SAXS or cross-linking energy term with our regular modeling energy function, which includes both the template energy term and the de novo energy terms. By optimizing the newly formulated energy function, we obtained protein models that fit the provided SAXS data better than the X-ray structure of the target does. However, the improvement of the model relative to the one modeled without the SAXS data was not significant. Consistent structural improvement was achieved by incorporating cross-linking data into the protein structure modeling. © 2018 Wiley Periodicals, Inc.
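
The combining idea above, adding a data-restraint term to a regular modeling energy, can be sketched as a weighted sum. The chi-square-style SAXS misfit, the weight, and the toy curves are illustrative assumptions, not the authors' actual parameterization.

```python
# Sketch of a combined energy: regular modeling terms (template + de novo)
# plus a data-restraint term that penalizes misfit to an experimental
# SAXS curve. Functional forms and weights are illustrative assumptions.
def saxs_penalty(calc_curve, exp_curve):
    """Mean squared misfit between calculated and experimental SAXS curves."""
    return sum((c - e) ** 2 for c, e in zip(calc_curve, exp_curve)) / len(exp_curve)

def total_energy(e_template, e_denovo, calc_curve, exp_curve, w_data=1.0):
    return e_template + e_denovo + w_data * saxs_penalty(calc_curve, exp_curve)

# A model whose calculated curve matches the data gets a lower total energy.
exp = [1.0, 0.8, 0.5, 0.2]
good = total_energy(10.0, 5.0, [1.0, 0.8, 0.5, 0.2], exp)
bad = total_energy(10.0, 5.0, [1.0, 0.5, 0.2, 0.1], exp)
```

Minimizing this combined function trades off agreement with the experimental data against the regular modeling terms, which is why the sparse data alone cannot fold the target.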

  18. Information-theoretic model comparison unifies saliency metrics

    PubMed Central

    Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias

    2015-01-01

    Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seek to predict fixations from images (sometimes termed "saliency" prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is that different metrics and models use different definitions of what a "saliency map" entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information theory by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations at the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
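
The information-gain idea above amounts to the average log-likelihood of observed fixations under a probabilistic model, relative to a baseline distribution. The toy 2x2 probability maps below are illustrative assumptions; real maps cover every image pixel.

```python
# Sketch of information gain for fixation prediction: mean
# log2 p_model(fix) - log2 p_baseline(fix) over observed fixations,
# in bits per fixation. The baseline stands in for e.g. a center-bias model.
from math import log2

def information_gain(model_p, baseline_p, fixations):
    gains = [log2(model_p[f]) - log2(baseline_p[f]) for f in fixations]
    return sum(gains) / len(gains)

# Each probability map assigns a probability to every pixel (sums to 1).
baseline = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
model = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}
observed = [(0, 0), (0, 0), (0, 1), (0, 0)]

ig = information_gain(model, baseline, observed)  # bits per fixation
```

A positive value means the model concentrates probability where fixations actually land, beyond what the baseline already explains; the per-fixation terms also show where the model fails at the pixel level.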

  19. 12 CFR Appendix A to Part 216 - Model Privacy Form

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... claim history; medical information; overdraft history; purchase history; account transactions; risk...; checking account information; employment information; wire transfer instructions. (c) General instructions... account; enter into an investment advisory contract; give us your income information; provide employment...

  20. 12 CFR Appendix A to Part 332 - Model Privacy Form

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... claim history; medical information; overdraft history; purchase history; account transactions; risk...; checking account information; employment information; wire transfer instructions. (c) General instructions... account; enter into an investment advisory contract; give us your income information; provide employment...

  1. 16 CFR Appendix A to Part 313 - Model Privacy Form

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... scores; insurance claim history; medical information; overdraft history; purchase history; account...; retirement assets; checking account information; employment information; wire transfer instructions. (c... account; enter into an investment advisory contract; give us your income information; provide employment...

  2. Australian Seismological Reference Model (AuSREM): crustal component

    NASA Astrophysics Data System (ADS)

    Salmon, M.; Kennett, B. L. N.; Saygin, E.

    2013-01-01

    Although Australia has been the subject of a wide range of seismological studies, these have concentrated on specific features of the continent at crustal scales and on the broad scale features in the mantle. The Australian Seismological Reference Model (AuSREM) is designed to bring together the existing information, and provide a synthesis in the form of a 3-D model that can provide the basis for future refinement from more detailed studies. Extensive studies in the last few decades provide good coverage for much of the continent, and the crustal model builds on the various data sources to produce a representative model that captures the major features of the continental structure and provides a basis for a broad range of further studies. The model is grid based with a 0.5° sampling in latitude and longitude, and is designed to be fully interpolable, so that properties can be extracted at any point. The crustal structure is built from five-layer representations of refraction and receiver function studies and tomographic information. The AuSREM crustal model is available at 1 km intervals. The crustal component makes use of prior compilations of sediment thicknesses, with cross checks against recent reflection profiling, and provides P and S wavespeed distributions through the crust. The primary information for P wavespeed comes from refraction profiles, for S wavespeed from receiver function studies. We are also able to use the results of ambient noise tomography to link the point observations into national coverage. Density values are derived using results from gravity interpretations with an empirical relation between P wavespeed and density. AuSREM is able to build on a new map of depth to Moho, which has been created using all available information including Moho picks from over 12 000 km of full crustal profiling across the continent. 
The crustal component of AuSREM provides a representative model that should be useful for modelling of seismic wave propagation and calculation of crustal corrections for tomography. Other applications include gravity studies and dynamic topography at the continental scale.
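
Two of the ingredients above, extracting a property from the regular 0.5° grid and deriving density from P wavespeed with an empirical relation, can be sketched as follows. Gardner's rule is used here purely as an illustrative stand-in for AuSREM's actual Vp-density relation, and nearest-node snapping stands in for full interpolation.

```python
# Sketch of grid-based property access and an empirical Vp-density relation.
# Gardner's rule (rho = 1.74 * Vp^0.25, Vp in km/s, rho in g/cm^3) is an
# illustrative assumption, not the relation used to build AuSREM.
def nearest_node(lat, lon, spacing=0.5):
    """Snap a coordinate to the nearest grid node at the given spacing."""
    return (round(lat / spacing) * spacing, round(lon / spacing) * spacing)

def gardner_density(vp_km_s):
    """Empirical density (g/cm^3) from P wavespeed (km/s)."""
    return 1.74 * vp_km_s ** 0.25

node = nearest_node(-23.37, 133.62)  # a point in central Australia
rho = gardner_density(6.0)           # a typical mid-crustal Vp
```

In a fully interpolable model one would interpolate between the surrounding nodes rather than snap to the nearest one, but the lookup pattern is the same.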

  3. Classifying the health of Connecticut streams using benthic macroinvertebrates with implications for water management.

    PubMed

    Bellucci, Christopher J; Becker, Mary E; Beauchene, Mike; Dunbar, Lee

    2013-06-01

    Bioassessments have formed the foundation of many water quality monitoring programs throughout the United States. Like many state water quality programs, Connecticut has developed a relational database containing information about species richness, species composition, relative abundance, and feeding relationships among macroinvertebrates present in stream and river systems. Geographic Information Systems can provide estimates of landscape condition and watershed characteristics and, when combined with measurements of stream biology, provide a visual display of information that is useful in a management context. The objective of our study was to estimate stream health for all wadeable stream kilometers in Connecticut using a combination of macroinvertebrate metrics and landscape variables. We developed and evaluated models using an information-theoretic approach to predict stream health as measured by a macroinvertebrate multimetric index (MMI) and identified the best-fitting model as a three-variable model, comprising percent impervious land cover, a wetlands metric, and catchment slope, that best fit the MMI scores (adj-R² = 0.56, SE = 11.73). We then provide examples of how modeling can augment existing programs to support water management policies under the Federal Clean Water Act, such as stream assessments and anti-degradation.
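
The model form described above is a linear fit of MMI scores on three landscape variables. The coefficients and inputs below are illustrative assumptions, not the study's fitted values; they only show the shape of such a predictor.

```python
# Sketch of a three-variable linear model for a macroinvertebrate
# multimetric index (MMI). Coefficients are toy values, not the study's.
def predict_mmi(impervious_pct, wetland_metric, slope,
                coef=(80.0, -1.5, 0.3, 2.0)):
    """MMI = b0 + b1*impervious + b2*wetland + b3*slope."""
    b0, b1, b2, b3 = coef
    return b0 + b1 * impervious_pct + b2 * wetland_metric + b3 * slope

# With a negative impervious-cover coefficient, more urbanized catchments
# predict lower (less healthy) MMI scores.
rural = predict_mmi(impervious_pct=2.0, wetland_metric=10.0, slope=5.0)
urban = predict_mmi(impervious_pct=30.0, wetland_metric=10.0, slope=5.0)
```

Applying such a fitted equation to GIS-derived variables for every catchment is what lets the study score all wadeable stream kilometers, not just sampled sites.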

  4. Using the results of a satisfaction survey to demonstrate the impact of a new library service model.

    PubMed

    Powelson, Susan E; Reaume, Renee D

    2012-09-01

    In 2005, the University of Calgary entered into a contract to provide library services to the staff and physicians of Alberta Health Services Calgary Zone (AHS CZ), creating the Health Information Network Calgary (HINC). A user satisfaction survey was contractually required to determine whether the new library service model created through the agreement with the University of Calgary was successful. Our additional objective was to determine whether information and resources provided through the HINC were making an impact on patient care. A user satisfaction survey of 18 questions was created in collaboration with AHS CZ contract partners and distributed using the snowball or convenience sample method. Six hundred and ninety-four surveys were returned. Of respondents, 75% use the HINC library services. More importantly, 43% of respondents indicated that search results provided by library staff had a direct impact on patient care decisions. Alberta Health Services Calgary Zone staff are satisfied with the new service delivery model; they are taking advantage of the services offered and using library-provided information to improve patient care. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.

  5. Trust-based information system architecture for personal wellness.

    PubMed

    Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd

    2014-01-01

    Modern eHealth, ubiquitous health, and personal wellness systems operate in an unsecure and ubiquitous information space where no predefined trust exists. This paper presents a novel information model and an architecture for trust-based privacy management of personal health and wellness information in a ubiquitous environment. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider and to use it to design personal privacy policies for trustworthy use of health and wellness services. For trust calculation, a novel set of measurable context-aware and health information-sensitive attributes is developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service provider-specific policies. Focus groups and information modelling were used to develop a wellness information model. A system analysis method based on sequential steps, which makes it possible to combine the results of analyzing privacy and trust concerns with the selection of trust and privacy services, was used to develop the information system architecture. Its services (e.g. trust calculation, decision support, policy management, and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.
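
One way to picture the trust calculation above is as a weighted score over measurable provider attributes, compared against the person's own policy threshold. The attribute names, weights, and threshold below are illustrative assumptions, not the paper's actual attribute set.

```python
# Sketch of a context-aware trust value: a weighted mean of attribute
# scores in [0, 1], checked against a personal policy threshold.
# Attribute names and weights are illustrative assumptions.
def trust_value(attributes, weights):
    total_w = sum(weights.values())
    return sum(attributes[k] * w for k, w in weights.items()) / total_w

weights = {"data_encrypted": 3.0, "purpose_stated": 2.0, "prior_conduct": 1.0}
provider = {"data_encrypted": 1.0, "purpose_stated": 0.5, "prior_conduct": 0.8}

score = trust_value(provider, weights)
trusted = score >= 0.7  # the person's own policy threshold
```

Because the attribute scores can change with context (location, time, service type), recomputing the value per request is what makes the trust dynamic rather than predefined.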

  6. Applying the concept of consumer confusion to healthcare: development and validation of a patient confusion model.

    PubMed

    Gebele, Christoph; Tscheulin, Dieter K; Lindenmeier, Jörg; Drevs, Florian; Seemann, Ann-Kathrin

    2014-01-01

    As patient autonomy and consumer sovereignty increase, information provision is considered essential to decrease information asymmetries between healthcare service providers and patients. However, greater availability of third-party information sources can have negative side effects. Patients can be confused by the nature, as well as the amount, of quality information when making choices among competing health care providers. Therefore, the present study explores how information may cause patient confusion and affect the behavioral intention to choose a health care provider. Based on a quota sample of German citizens (n = 198), the present study validates a model of patient confusion in the context of hospital choice. The study results reveal that perceived information overload, perceived similarity, and perceived ambiguity of health information impact the affective and cognitive components of patient confusion. Confused patients have a stronger inclination to hastily narrow down their set of possible decision alternatives. Finally, an empirical analysis reveals that the affective and cognitive components of patient confusion mediate the effects of perceived information overload, perceived similarity, and perceived ambiguity of information. © The Author(s) 2014.

  7. 12 CFR Appendix A to Part 573 - Model Privacy Form

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... rates and payments; retirement assets; checking account information; employment information; wire... identified as “[account #].” Institutions that require additional or different information, such as a random... for financing; apply for a lease; provide account information; give us your contact information; pay...

  8. 12 CFR Appendix A to Part 40 - Model Privacy Form

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... rates and payments; retirement assets; checking account information; employment information; wire... identified as “[account #].” Institutions that require additional or different information, such as a random... for financing; apply for a lease; provide account information; give us your contact information; pay...

  9. 12 CFR Appendix A to Part 716 - Model Privacy Form

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... rates and payments; retirement assets; checking account information; employment information; wire... identified as “[account #].” Institutions that require additional or different information, such as a random... for financing; apply for a lease; provide account information; give us your contact information; pay...

  10. Thinking Developmentally: The Next Evolution in Models of Health.

    PubMed

    Garner, Andrew S

    2016-09-01

    As the basic sciences that inform conceptions of human health advance, so must the models that are used to frame additional research, to teach the next generation of providers, and to inform health policy. This article briefly reviews the evolution from a biomedical model to a biopsychosocial (BPS) model and to an ecobiodevelopmental (EBD) model. Like the BPS model, the EBD model reaffirms the biological significance of psychosocial features within the patient's ecology, but it does so at the molecular and cellular levels. More importantly, the EBD model adds the dimension of time, forcing providers to "think developmentally" and to acknowledge the considerable biological and psychological consequences of previous experiences. For the health care system to move from a reactive "sick care" system to a proactive "well care" system, all providers must begin thinking developmentally by acknowledging the dynamic but cumulative dance between nature and nurture that drives development, behavior, and health, not only in childhood, but across the lifespan.

  11. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves various processes: capturing, preserving, manipulating, and delivering information. E-Governance is meant to transform governance so that it serves citizens better, being transparent, reliable, participatory, and accountable. The purpose of this paper is to present an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, fosters innovation, identifies the path to optimal service delivery to citizens, and supports implementation in a transparent and accountable way. The paper also focuses on the E-government Service Manager as an essential component of the service-oriented computing model, providing a dynamically extensible architecture in which any area or branch can introduce innovative services. At the heart of the paper is a conceptual model that enables e-government communication for trade and business, citizens and government, and autonomous bodies.

  12. Sensitivity Analysis of Dispersion Model Results in the NEXUS Health Study Due to Uncertainties in Traffic-Related Emissions Inputs

    EPA Science Inventory

    Dispersion modeling tools have traditionally provided critical information for air quality management decisions, but have been used recently to provide exposure estimates to support health studies. However, these models can be challenging to implement, particularly in near-road s...

  13. Information of Complex Systems and Applications in Agent Based Modeling.

    PubMed

    Bao, Lei; Fritchman, Joseph C

    2018-04-18

    Information about a system's internal interactions is important to modeling the system's dynamics. This study examines the finer categories of the information definition and explores the features of a type of local information that describes the internal interactions of a system. Based on the results, a dual-space agent and information modeling framework (AIM) is developed by explicitly distinguishing an information space from the material space. The two spaces can evolve both independently and interactively. The dual-space framework can provide new analytic methods for agent based models (ABMs). Three examples are presented including money distribution, individual's economic evolution, and artificial stock market. The results are analyzed in the dual-space, which more clearly shows the interactions and evolutions within and between the information and material spaces. The outcomes demonstrate the wide-ranging applicability of using the dual-space AIMs to model and analyze a broad range of interactive and intelligent systems.
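
The money-distribution example above can be pictured in a dual-space spirit: agents' money evolves in the material space, while each agent separately accumulates a record of past partners in an information space. The exchange rule and bookkeeping below are illustrative assumptions, not the paper's actual AIM formulation.

```python
# Sketch of a dual-space agent-based model: money (material space)
# changes hands, while interaction records (information space) accumulate
# independently. Rules and parameters are illustrative assumptions.
import random

def run_exchanges(n_agents=50, n_steps=2000, seed=1):
    random.seed(seed)
    money = [100.0] * n_agents                     # material space
    partners = [set() for _ in range(n_agents)]    # information space
    for _ in range(n_steps):
        i, j = random.sample(range(n_agents), 2)
        if money[i] >= 1.0:      # i gives one unit to j, never going negative
            money[i] -= 1.0
            money[j] += 1.0
        partners[i].add(j)       # both agents record the interaction,
        partners[j].add(i)       # whether or not money actually moved
    return money, partners

money, partners = run_exchanges()
```

Note that the information space keeps evolving even on steps where the material transfer is blocked, which is the kind of independent-yet-interactive evolution the dual-space framework is meant to expose.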

  14. Health promotion communications system: a model for a dispersed population.

    PubMed

    Foran, M; Campanelli, L C

    1995-11-01

    1. Corporations with geographically dispersed populations need to provide flexible health promotion programs tailored to meet specific employee interests and needs. 2. Bell Atlantic developed a dispersed model approach based on the transtheoretical model of behavior change. The key to this model is to identify the stage at which the individual is operating and provide appropriate information and behavior change programs. 3. Components of the program include: health risk appraisal; exercise/activity tracking system; on-line nurse health information service; network of fitness facilities; employee assistance programs; health library available by fax; health film library; network of health promotion volunteers; and targeted health and marketing messages via corporate media.

  15. Information Model Translation to Support a Wider Science Community

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Ritschel, Bernd; Hardman, Sean; Joyner, Ronald

    2014-05-01

    The Planetary Data System (PDS), NASA's long-term archive for solar system exploration data, has just released PDS4, a modernization of the PDS architecture, data standards, and technical infrastructure. This next generation system positions the PDS to meet the demands of the coming decade, including big data, international cooperation, distributed nodes, and multiple ways of analysing and interpreting data. It also addresses three fundamental project goals: providing more efficient data delivery by data providers to the PDS, enabling a stable, long-term usable planetary science data archive, and enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. The PDS4 information architecture is used to describe all PDS data using a common model. Captured in an ontology modeling tool, it supports a hierarchy of data dictionaries built to the ISO/IEC 11179 standard and is designed to increase flexibility, enable complex searches at the product level, and promote interoperability that facilitates data sharing both nationally and internationally. A PDS4 information architecture design requirement stipulates that the content of the information model must be translatable to external data definition languages such as XML Schema, XMI/XML, and RDF/XML. To support Semantic Web standards, we are now mapping the contents into RDF/XML to support SPARQL-capable databases. We are also building a terminological ontology to support virtually unified data retrieval and access. This paper will provide an overview of the PDS4 information architecture, focusing on its domain information model and how the translation and mapping are being accomplished.
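
The translation idea above, rendering a class from the information model as triples that an RDF/XML or SPARQL-capable store could ingest, can be sketched as follows. The class name, attribute names, and namespace are illustrative assumptions, not actual PDS4 definitions.

```python
# Sketch of model-to-RDF translation: a class and its attributes emitted
# as subject-predicate-object triples. Names and namespace are
# illustrative assumptions, not PDS4's actual vocabulary.
NS = "http://example.org/pds4#"   # placeholder namespace

def class_to_triples(name, attributes):
    triples = [(NS + name, "rdf:type", "owl:Class")]
    for attr, datatype in attributes.items():
        triples.append((NS + name, NS + "hasAttribute", NS + attr))
        triples.append((NS + attr, NS + "hasDatatype", datatype))
    return triples

triples = class_to_triples("Observing_System",
                           {"name": "xsd:string", "description": "xsd:string"})
```

Once the whole model is flattened into triples like these, SPARQL queries can traverse class-attribute relationships without knowing the original modeling tool's format.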

  16. ENVIRONMENTAL INFORMATION MANAGEMENT SYSTEM (EIMS)

    EPA Science Inventory

    The Environmental Information Management System (EIMS) organizes descriptive information (metadata) for data sets, databases, documents, models, projects, and spatial data. The EIMS design provides a repository for scientific documentation that can be easily accessed with standar...

  17. Measuring situational awareness and resolving inherent high-level fusion obstacles

    NASA Astrophysics Data System (ADS)

    Sudit, Moises; Stotz, Adam; Holender, Michael; Tagliaferri, William; Canarelli, Kathie

    2006-04-01

    Information Fusion Engine for Real-time Decision Making (INFERD) is a tool that was developed to supplement current graph matching techniques in Information Fusion models. Based on sensory data and a priori models, INFERD dynamically generates, evolves, and evaluates hypotheses on the current state of the environment. The a priori models developed are hierarchical in nature, lending themselves to a multi-level Information Fusion process whose primary output provides situational awareness of the environment of interest in the context of the models running. In this paper we look at INFERD's multi-level fusion approach and provide insight into inherent problems in the approach, such as fragmentation, and the research being undertaken to mitigate those deficiencies. Because of the large variance of data in disparate environments, the awareness of situations in those environments can be drastically different. To accommodate this, the INFERD framework provides support for plug-and-play fusion modules which can be developed specifically for domains of interest. However, because the models running in INFERD are graph based, some default measurements can be provided; these will be discussed in the paper. Among them are a Depth measurement to determine how much danger is presented by the action taking place, a Breadth measurement to gain information regarding the scale of an attack that is currently happening, and finally a Reliability measure to tell the user the credibility of a particular hypothesis. All of these results will be demonstrated in the Cyber domain, where recent research has shown the area to be well-defined and bounded, so that new models and algorithms can be developed and evaluated.

  18. A role for the developing lexicon in phonetic category acquisition

    PubMed Central

    Feldman, Naomi H.; Griffiths, Thomas L.; Goldwater, Sharon; Morgan, James L.

    2013-01-01

    Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning. PMID:24219848
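
The disambiguation idea above can be pictured with Bayes' rule: an acoustic value alone may be ambiguous between two overlapping categories, but knowing which word it occurs in shifts the posterior. The category parameters and word statistics below are illustrative assumptions, not the paper's fitted model.

```python
# Sketch of word-level disambiguation of overlapping phonetic categories:
# the posterior over category A combines an acoustic likelihood with the
# probability that category A appears in the current word frame.
# Category means/variance and word priors are illustrative assumptions.
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def posterior_cat_a(x, p_a_in_word=0.5, mu_a=0.0, mu_b=1.0, sigma=1.0):
    """P(category A | acoustics x, word context) via Bayes' rule."""
    like_a = normal_pdf(x, mu_a, sigma) * p_a_in_word
    like_b = normal_pdf(x, mu_b, sigma) * (1 - p_a_in_word)
    return like_a / (like_a + like_b)

ambiguous = posterior_cat_a(0.5)                   # no word information: 50/50
in_a_word = posterior_cat_a(0.5, p_a_in_word=0.9)  # word frame favors A
```

A token exactly between the two category means is unclassifiable from acoustics alone, yet the same token is confidently assigned once the word frame supplies a prior, which is the feedback from the lexicon the model formalizes.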

  19. Results from Assimilating AMSR-E Soil Moisture Estimates into a Land Surface Model Using an Ensemble Kalman Filter in the Land Information System

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay B.; Crosson, William L.; Case, Jonathan L.; Hale, Robert

    2010-01-01

    Objectives: (1) improve simulations of soil moisture/temperature, and consequently boundary layer states and processes, by assimilating AMSR-E soil moisture estimates into a coupled land surface-mesoscale model; (2) provide a new land surface model as an option in the Land Information System (LIS).
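
The assimilation step can be sketched for a single scalar soil-moisture state: the ensemble spread supplies the background error variance, and each member is nudged toward the satellite observation by the Kalman gain. The values, the identity observation operator, and the simplified (unperturbed-observation) update are illustrative assumptions, not the LIS implementation.

```python
# Sketch of an ensemble Kalman filter update for a scalar soil-moisture
# state with an identity observation operator. A full EnKF perturbs the
# observation per member; this simplified form only shows the gain idea.
def enkf_update(ensemble, obs, obs_var):
    n = len(ensemble)
    mean = sum(ensemble) / n
    bg_var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = bg_var / (bg_var + obs_var)        # Kalman gain, H = identity
    return [x + gain * (obs - x) for x in ensemble]

prior = [0.20, 0.24, 0.28, 0.32]              # volumetric soil moisture members
posterior = enkf_update(prior, obs=0.30, obs_var=0.001)
```

The update pulls the ensemble mean toward the AMSR-E-style observation and shrinks the spread, and the updated states then feed back into the coupled land surface-mesoscale simulation.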

  20. Computer simulation modeling of recreation use: Current status, case studies, and future directions

    Treesearch

    David N. Cole

    2005-01-01

    This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...

  1. Teaching Information & Technology Skills: The Big6[TM] in Elementary Schools. Professional Growth Series.

    ERIC Educational Resources Information Center

    Eisenberg, Michael B.; Berkowitz, Robert E.

    This book about using the Big6 information problem solving process model in elementary schools is organized into two parts. Providing an overview of the Big6 approach, Part 1 includes the following chapters: "Introduction: The Need," including the information problem, the Big6 and other process models, and teaching/learning the Big6;…

  2. Risk Aversion and the Value of Information.

    ERIC Educational Resources Information Center

    Eeckhoudt, Louis; Godfroid, Phillippe

    2000-01-01

    Explains why risk aversion does not always induce a greater information value, but may instead induce a lower information value as risk aversion increases. Presents a basic model defining the concept of perfect information value and providing a numerical illustration. Includes references. (CMK)
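
    The abstract mentions a numerical illustration of perfect information value; the sketch below computes it one common way, as a difference of certainty equivalents (the payoffs, probabilities, and utility functions are hypothetical, and, as the article argues, the risk-neutral versus risk-averse comparison can go either way depending on the decision problem):

```python
import math

def evpi(payoffs, p_good, utility, inv_utility):
    """Value of perfect information as a difference of certainty equivalents
    for a two-state, two-action decision. payoffs: {action: (pay_good, pay_bad)}."""
    def eu(action):
        g, b = payoffs[action]
        return p_good * utility(g) + (1 - p_good) * utility(b)
    # Best expected utility choosing one action before any information arrives
    eu_prior = max(eu(a) for a in payoffs)
    # With perfect information, the best action is picked separately per state
    eu_post = (p_good * max(utility(payoffs[a][0]) for a in payoffs)
               + (1 - p_good) * max(utility(payoffs[a][1]) for a in payoffs))
    return inv_utility(eu_post) - inv_utility(eu_prior)

acts = {"risky": (100.0, 0.0), "safe": (40.0, 40.0)}
v_neutral = evpi(acts, 0.5, lambda x: x, lambda u: u)      # linear utility
v_averse = evpi(acts, 0.5, math.sqrt, lambda u: u * u)     # concave utility
```

    With these particular payoffs the risk-averse decision maker values perfect information differently from the risk-neutral one, which is the kind of comparison the article formalizes.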

  3. How Adverse Outcome Pathways Can Aid the Development and Use of Computational Prediction Models for Regulatory Toxicology

    PubMed Central

    Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J.; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M. E. (Bette); Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M.; Whelan, Maurice

    2017-01-01

    Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework provides a systematic approach for organizing knowledge that may support such inference. Likewise, computational models of biological systems at various scales provide another means and platform to integrate current biological understanding to facilitate inference and extrapolation. We argue that the systematic organization of knowledge into AOP frameworks can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. This concept was explored as part of a workshop on AOP-Informed Predictive Modeling Approaches for Regulatory Toxicology held September 24–25, 2015. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as guide for both AOP and complementary model development is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. PMID:27994170

  4. HOW CAN BIOLOGICALLY-BASED MODELING OF ARSENIC KINETICS AND DYNAMICS INFORM THE RISK ASSESSMENT PROCESS?

    EPA Science Inventory

    Quantitative biologically-based models describing key events in the continuum from arsenic exposure to the development of adverse health effects provide a framework to integrate information obtained across diverse research areas. For example, genetic polymorphisms in arsenic met...

  5. The development and evaluation of a nursing information system for caring clinical in-patient.

    PubMed

    Fang, Yu-Wen; Li, Chih-Ping; Wang, Mei-Hua

    2015-01-01

    The research aimed to develop a nursing information system to simplify the admission procedure for clinical in-patient care and enhance the efficiency of medical information documentation. By correctly delivering patients' health records and providing continuous care, patient safety and care quality would be effectively improved. The study applied the Spiral Model of system development and composed a nursing information team, using strategies of data collection, working-environment observation, use-case modeling, and Joint Application Design (JAD) conferences to complete the system requirement analysis and design. The Admission Care Management Information System (ACMIS) mainly included: (1) an admission nursing management information system; (2) an inter-shift meeting information management system; and (3) linkage to the drug management system and the physical examination record system. The evaluation framework contained qualitative and quantitative components that provided both formative and summative elements. System evaluation applied an information success model, and a questionnaire was developed covering nurses' acceptance and satisfaction. The questionnaire results showed that users' satisfaction, perceived self-involvement, age, and information quality were positively related to personal and organizational effectiveness. According to the results of this study, the Admission Care Management Information System was practical for simplifying clinical working procedures and effective in communicating and documenting admission medical information.

  6. Attitudes and Action.

    ERIC Educational Resources Information Center

    Glander, Molly H.; O'Donnell, William J.

    This handbook provides sexual information for college students. Though designed for students at North Carolina State University, it is a good model for similar publications on other campuses. The booklet begins by defining different forms of sexual activity--solo, casual, relational, and procreational. Other sections provide concise information on…

  7. Value and role of intensive care unit outcome prediction models in end-of-life decision making.

    PubMed

    Barnato, Amber E; Angus, Derek C

    2004-07-01

    In the United States, intensive care unit (ICU) admission at the end of life is commonplace. What is the value and role of ICU mortality prediction models for informing the utility of ICU care? In this article, we review the history, statistical underpinnings, and current deployment of these models in clinical care. We conclude that the use of outcome prediction models to ration care that is unlikely to provide an expected benefit is hampered by imperfect performance, the lack of real-time availability, failure to consider functional outcomes beyond survival, and physician resistance to the use of probabilistic information when death is guaranteed by the decision it informs. Among these barriers, the most important technical deficiency is the lack of automated information systems to provide outcome predictions to decision makers, and the most important research and policy agenda is to understand and address our national ambivalence toward rationing care based on any criterion.

  8. Effects of the amount of feedback information on urban traffic with advanced traveler information system

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Li, Ming; Jiang, Rui; Hu, Mao-Bin

    2017-09-01

    In a real traffic system, information feedback has been proven to be a good way to alleviate traffic jams. However, due to the massive traffic information of a real system, the procedure is often difficult in practice. In this paper, we study the effects of the amount of feedback information based on a cellular automaton model of urban traffic. Most interestingly, we found that when travelers are provided with the traffic information of only part of a road, the system can perform better than when the road's full traffic information is provided. On this basis, we can provide a more effective routing strategy with less information. We demonstrate that providing the traffic information of only about the first half of the road, from upstream to downstream, maximizes the traffic capacity of the system. We explain these phenomena by studying the distribution pattern of vehicles and the detailed turning environment at the intersections. The effects of the traffic light period are also examined.

  9. Capturing and Modeling Domain Knowledge Using Natural Language Processing Techniques

    DTIC Science & Technology

    2005-06-01

    Intelligence Artificielle, France, May 2001, p. 109-118 [Barrière, 2001] -----. “Investigating the Causal Relation in Informative Texts”. Terminology, 7:2...out of the flood of information, the military have to create new ways of processing sensor and intelligence information, and of providing the results to commanders who must take timely operational

  10. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program, jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, and allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them directly, so it is not necessary to use the CME system to access these models.
However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access these models; in other cases, it provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes, including seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group, as well as anelastic wave propagation codes such as Kim Olsen's finite difference code and Carnegie Mellon's Hercules finite element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations in which the output of one processing step is used as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources, as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.

  11. Reducing Costs and Increasing Productivity in Ship Maintenance Using Product Lifecycle Management, 3D Laser Scanning and 3D Printing

    DTIC Science & Technology

    2014-03-01

    information modeling guide series: 03—GSA BIM guide for 3D imaging (Ver. 1). Retrieved from http://www.gsa.gov/graphics/pbs/GSA_BIM_Guide_Series_03... model during a KVA knowledge audit at FRC San Diego. The information used in the creation of his KVA models was generated from the SME-provided...Kenney then used the information gathered during SME interviews to reengineer the process to include 3D printing to form his “to-be” model. The

  12. Models Extracted from Text for System-Software Safety Analyses

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2010-01-01

    This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.

  13. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.

    PubMed

    Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio

    2010-03-26

    Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
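
    A minimal analogue of the paper's comparison (a generic birth-death process with invented rates, not the auxin-transport model itself) solves the same system deterministically and with an exact stochastic simulation (Gillespie) algorithm; the two agree on the mean while only the stochastic runs expose the variability:

```python
import random

# Birth-death process: production at rate K, degradation at rate G * n.
K, G = 10.0, 0.1  # deterministic steady state is n* = K / G = 100

def deterministic(t_end, dt=0.01):
    """Euler integration of dn/dt = K - G*n."""
    n, t = 0.0, 0.0
    while t < t_end:
        n += dt * (K - G * n)
        t += dt
    return n

def gillespie(t_end, rng):
    """Exact stochastic simulation (Gillespie SSA) of the same system."""
    n, t = 0, 0.0
    while True:
        birth, death = K, G * n
        total = birth + death
        t += rng.expovariate(total)     # time to the next reaction event
        if t > t_end:
            return n
        n += 1 if rng.random() < birth / total else -1

rng = random.Random(1)
ode_n = deterministic(100.0)                        # approaches 100
ssa_ns = [gillespie(100.0, rng) for _ in range(200)]
ssa_mean = sum(ssa_ns) / len(ssa_ns)
```

    Near steady state both approaches give a mean copy number of about K/G = 100, but only the Gillespie runs also yield the (approximately Poisson) spread around that mean, mirroring the paper's point that the frameworks provide different information about the same system.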

  14. The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval

    DTIC Science & Technology

    2006-07-01

    reflect those of the sponsor. ABSTRACT: Unigram language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses...the Relevance model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function...In addition, the SD-based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework. We

  15. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    PubMed

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of their many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors, influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and the literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments that provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially, whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
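
    A toy sketch of the kind of Markov Chain Monte Carlo estimation described here, assuming a hypothetical one-parameter first-order decay model rather than ASM1 (the prior, noise level, and data below are all invented):

```python
import math
import random

def log_posterior(k, data, prior_mu=0.5, prior_sd=0.5, noise_sd=0.05):
    """Gaussian prior on rate k plus a Gaussian likelihood of observed
    concentrations under a first-order decay model c(t) = exp(-k*t)."""
    if k <= 0:
        return -math.inf
    lp = -0.5 * ((k - prior_mu) / prior_sd) ** 2
    for t, c in data:
        lp += -0.5 * ((c - math.exp(-k * t)) / noise_sd) ** 2
    return lp

def metropolis(data, n_steps, rng, step=0.05):
    """Random-walk Metropolis sampler over the rate parameter k."""
    k = 0.5
    lp = log_posterior(k, data)
    samples = []
    for _ in range(n_steps):
        k_new = k + rng.gauss(0, step)
        lp_new = log_posterior(k_new, data)
        if math.log(rng.random()) < lp_new - lp:  # accept/reject step
            k, lp = k_new, lp_new
        samples.append(k)
    return samples

rng = random.Random(0)
true_k = 0.3
data = [(t, math.exp(-true_k * t) + rng.gauss(0, 0.05)) for t in range(1, 9)]
chain = metropolis(data, 5000, rng)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

    In the multi-parameter ASM setting, the same kind of chain, run over a parameter vector, yields the joint posterior samples from which JPDFs and posterior correlations are computed.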

  16. Processing of angular motion and gravity information through an internal model.

    PubMed

    Laurens, Jean; Straumann, Dominik; Hess, Bernhard J M

    2010-09-01

    The vestibular organs in the base of the skull provide important information about head orientation and motion in space. Previous studies have suggested that both angular velocity information from the semicircular canals and information about head orientation and translation from the otolith organs are centrally processed in an internal model of head motion, using the principles of optimal estimation. This concept has been successfully applied to model behavioral responses to classical vestibular motion paradigms. This study measured the dynamics of the vestibuloocular reflex (VOR) during postrotatory tilt, tilt during optokinetic afternystagmus, and off-vertical axis rotation. The influence of the otolith signal on the VOR was systematically varied by using a series of tilt angles. We found that the time constants of the responses varied almost identically as a function of gravity across these paradigms. We show that Bayesian modeling could predict the experimental results in an accurate and consistent manner. In contrast to other approaches, the Bayesian model also provides a plausible explanation of why these vestibulo-ocular motor responses occur as a consequence of an internal process of optimal motion estimation.

  17. Model-Based Assurance Case+ (MBAC+): Tutorial on Modeling Radiation Hardness Assurance Activities

    NASA Technical Reports Server (NTRS)

    Austin, Rebekah; Label, Ken A.; Sampson, Mike J.; Evans, John; Witulski, Art; Sierawski, Brian; Karsai, Gabor; Mahadevan, Nag; Schrimpf, Ron; Reed, Robert A.

    2017-01-01

    This presentation will cover why modeling is useful for radiation hardness assurance cases, and also provide information on Model-Based Assurance Case+ (MBAC+), NASAs Reliability Maintainability Template, and Fault Propagation Modeling.

  18. Architectural approaches for HL7-based health information systems implementation.

    PubMed

    López, D M; Blobel, B

    2010-01-01

    Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology, approaching different aspects of systems architecture such as the business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. The point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with the service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue for any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model - offered by HIS-DF and supported in HL7 v3 artifacts - is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.

  19. Correlation of rocket propulsion fuel properties with chemical composition using comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry followed by partial least squares regression analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kehimkar, Benjamin; Hoggard, Jamin C.; Marney, Luke C.

    There is an increased need to more fully assess and control the composition of kerosene-based rocket propulsion fuels, namely RP-1 and RP-2. In particular, it is crucial to be able to make better quantitative connections among the following three attributes: (a) fuel performance, (b) fuel properties (flash point, density, kinematic viscosity, net heat of combustion, hydrogen content, etc.), and (c) the chemical composition of a given fuel (i.e., the specific chemical compounds and compound classes present as a result of feedstock blending and processing). Indeed, recent efforts in predicting fuel performance through modeling put greater emphasis on detailed and accurate fuel property and fuel compositional information. In this regard, advanced distillation curve (ADC) metrology provides improved data relative to classical boiling point and volatility curve techniques. Using ADC metrology, data obtained from RP-1 and RP-2 fuels provide compositional variation information that is directly relevant to predictive modeling of fuel performance. In such studies, one-dimensional gas chromatography (GC) combined with mass spectrometry (MS) is typically employed to provide chemical composition information. Building on approaches using GC-MS, but to glean substantially more chemical composition information from these complex fuels, we have recently studied the use of comprehensive two-dimensional gas chromatography combined with time-of-flight mass spectrometry (GC × GC - TOFMS) to provide chemical composition data that is significantly richer than that provided by GC-MS methods. In this report, by applying multivariate data analysis techniques, referred to as chemometrics, we are able to readily model (correlate) the chemical compositional information from RP-1 and RP-2 fuels provided by GC × GC - TOFMS to fuel property information such as that provided by the ADC method and other specification properties. We anticipate that this new chemical analysis strategy will have broad implications for the development of high-fidelity composition-property models, leading to an optimized approach to fuel formulation and specification for advanced engine cycles.
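
    As a hypothetical illustration of the chemometric step, the sketch below fits a one-component PLS1 model in plain Python, with an invented "peak table" standing in for GC × GC - TOFMS output and a single invented property standing in for an ADC-derived value:

```python
import random

def pls1_one_component(X, y):
    """One-component PLS1: project composition vectors onto the single
    direction w that covaries most with the property y, then regress
    y on the resulting scores."""
    n, m = len(X), len(X[0])
    x_mean = [sum(row[j] for row in X) / n for j in range(m)]
    y_mean = sum(y) / n
    Xc = [[row[j] - x_mean[j] for j in range(m)] for row in X]
    yc = [v - y_mean for v in y]
    # Weight vector: the covariance direction X^T y, normalized
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores and a regression coefficient on the scores
    t = [sum(Xc[i][j] * w[j] for j in range(m)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)

    def predict(x):
        score = sum((x[j] - x_mean[j]) * w[j] for j in range(m))
        return y_mean + b * score
    return predict

# Invented "peak table": each sample holds abundances of 4 compound
# classes driven by one latent blend factor; the property tracks it.
random.seed(2)
direction = [0.5, 1.0, -0.3, 0.8]
samples, props = [], []
for _ in range(20):
    s = random.uniform(-1, 1)
    samples.append([s * d + random.gauss(0, 0.01) for d in direction])
    props.append(2.0 * s + random.gauss(0, 0.01))

model = pls1_one_component(samples, props)
pred = model(samples[0])
```

    Real chemometric workflows use many latent components and cross-validation, but the structure is the same: compress correlated composition measurements into a few latent variables and regress the property of interest on them.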

  20. A data types profile suitable for use with ISO EN 13606.

    PubMed

    Sun, Shanghua; Austin, Tony; Kalra, Dipak

    2012-12-01

    ISO EN 13606 is a five part International Standard specifying how Electronic Healthcare Record (EHR) information should be communicated between different EHR systems and repositories. Part 1 of the standard defines an information model for representing the EHR information itself, including the representation of types of data value. A later International Standard, ISO 21090:2010, defines a comprehensive set of models for data types needed by all health IT systems. This latter standard is vast, and duplicates some of the functions already handled by ISO EN 13606 part 1. A profile (sub-set) of ISO 21090 would therefore be expected to provide EHR system vendors with a more specially tailored set of data types to implement and avoid the risk of providing more than one modelling option for representing the information properties. This paper describes the process and design decisions made for developing a data types profile for EHR interoperability.

  1. Managing data from multiple disciplines, scales, and sites to support synthesis and modeling

    USGS Publications Warehouse

    Olson, R. J.; Briggs, J. M.; Porter, J.H.; Mah, Grant R.; Stafford, S.G.

    1999-01-01

    The synthesis and modeling of ecological processes at multiple spatial and temporal scales involves bringing together and sharing data from numerous sources. This article describes a data and information system model that facilitates assembling, managing, and sharing diverse data from multiple disciplines, scales, and sites to support integrated ecological studies. Cross-site scientific-domain working groups coordinate the development of data associated with their particular scientific working group, including decisions about data requirements, data to be compiled, data formats, derived data products, and schedules across the sites. The Web-based data and information system consists of nodes for each working group plus a central node that provides data access, project information, data query, and other functionality. The approach incorporates scientists and computer experts in the working groups and provides incentives for individuals to submit documented data to the data and information system.

  2. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds together with their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  3. Shannon information entropy in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Ma, Yu-Gang

    2018-03-01

    The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. The Shannon information entropy essentially quantifies the information of a quantity through its specific distribution, and information entropy based methods have been deeply developed in many scientific areas, including physics. The dynamical properties of the heavy-ion collision (HIC) process make nuclear matter and its evolution difficult and complex to study, and Shannon information entropy theory can provide new methods and observables to understand the physical phenomena both theoretically and experimentally. To better understand the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamic models, and statistical models, are briefly introduced. The typical applications of Shannon information theory in HICs are collected; these cover the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate-mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on the key questions we are seeking to answer. It is suggested to further develop information entropy methods in nuclear reaction models, as well as to develop new analysis methods to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.
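
    For a discrete distribution, the Shannon entropy is H = -Σ p_i log2 p_i; a minimal sketch (with invented counts, e.g. a fragment yield distribution) shows that a uniform distribution maximizes it:

```python
import math

def shannon_entropy(counts):
    """Shannon information entropy H = -sum p_i log2 p_i (in bits) of an
    observed distribution, e.g. a fragment yield distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

uniform = shannon_entropy([25, 25, 25, 25])  # maximal for 4 outcomes: 2 bits
peaked = shannon_entropy([97, 1, 1, 1])      # nearly deterministic, low entropy
```

    In HIC applications the counts would come from measured or simulated observables (e.g. fragment multiplicities), and changes in H across systems or energies are what signal effects such as the isobaric difference scaling discussed above.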

  4. Supply of genetic information--amount, format, and frequency.

    PubMed

    Misztal, I; Lawlor, T J

    1999-05-01

    The volume and complexity of genetic information is increasing because of new traits and better models. New traits may include reproduction, health, and carcass. More comprehensive models include the test day model in dairy cattle or a growth model in beef cattle. More complex models, which may include nonadditive effects such as inbreeding and dominance, also provide additional information. The amount of information per animal may increase drastically if DNA marker typing becomes routine and quantitative trait loci information is utilized. In many industries, evaluations are run more frequently. They result in faster genetic progress and improved management and marketing opportunities but also in extra costs and information overload. Adopting new technology and making some organizational changes can help realize all the added benefits of the improvements to the genetic evaluation systems at an acceptable cost. Continuous genetic evaluation, in which new records are accepted and breeding values are updated continuously, will relieve time pressures. An online mating system with access to both genetic and marketing information can result in mating recommendations customized for each user. Such a system could utilize inbreeding and dominance information that cannot efficiently be accommodated in the current sire summaries or off-line mating programs. The new systems will require a new organizational approach in which the task of scientists and technicians will not be simply running the evaluations but also providing the research, design, supervision, and maintenance required in the entire system of evaluation, decision making, and distribution.

  5. New Rodent Population Models May Inform Human Health Risk Assessment and Identification of Genetic Susceptibility to Environmental Exposures.

    PubMed

    Harrill, Alison H; McAllister, Kimberly A

    2017-08-15

    This paper provides an introduction for environmental health scientists to emerging population-based rodent resources. Mouse reference populations provide an opportunity to model environmental exposures and gene-environment interactions in human disease and to inform human health risk assessment. This review will describe several mouse populations for toxicity assessment, including older models such as the Mouse Diversity Panel (MDP), and newer models that include the Collaborative Cross (CC) and Diversity Outbred (DO) models. This review will outline the features of the MDP, CC, and DO mouse models and will discuss published case studies investigating the use of these mouse population resources in each step of the risk assessment paradigm. These unique resources have the potential to be powerful tools for generating hypotheses related to gene-environment interplay in human disease, performing controlled exposure studies to understand the differential responses in humans for susceptibility or resistance to environmental exposures, and identifying gene variants that influence sensitivity to toxicity and disease states. These new resources offer substantial advances to classical toxicity testing paradigms by including genetically sensitive individuals that may inform toxicity risks for sensitive subpopulations. Both in vivo and complementary in vitro resources provide platforms with which to reduce uncertainty by providing population-level data around biological variability. https://doi.org/10.1289/EHP1274.

  6. New Rodent Population Models May Inform Human Health Risk Assessment and Identification of Genetic Susceptibility to Environmental Exposures

    PubMed Central

    Harrill, Alison H.

    2017-01-01

    Background: This paper provides an introduction for environmental health scientists to emerging population-based rodent resources. Mouse reference populations provide an opportunity to model environmental exposures and gene–environment interactions in human disease and to inform human health risk assessment. Objectives: This review will describe several mouse populations for toxicity assessment, including older models such as the Mouse Diversity Panel (MDP), and newer models that include the Collaborative Cross (CC) and Diversity Outbred (DO) models. Methods: This review will outline the features of the MDP, CC, and DO mouse models and will discuss published case studies investigating the use of these mouse population resources in each step of the risk assessment paradigm. Discussion: These unique resources have the potential to be powerful tools for generating hypotheses related to gene–environment interplay in human disease, performing controlled exposure studies to understand the differential responses in humans for susceptibility or resistance to environmental exposures, and identifying gene variants that influence sensitivity to toxicity and disease states. Conclusions: These new resources offer substantial advances to classical toxicity testing paradigms by including genetically sensitive individuals that may inform toxicity risks for sensitive subpopulations. Both in vivo and complementary in vitro resources provide platforms with which to reduce uncertainty by providing population-level data around biological variability. https://doi.org/10.1289/EHP1274 PMID:28886592

  7. Animal models of polymicrobial pneumonia

    PubMed Central

    Hraiech, Sami; Papazian, Laurent; Rolain, Jean-Marc; Bregeon, Fabienne

    2015-01-01

    Pneumonia is one of the leading causes of severe and occasionally life-threatening infections. The physiopathology of pneumonia has been extensively studied, providing information for the development of new treatments for this condition. In addition to in vitro research, animal models have been largely used in the field of pneumonia. Several models have been described and have provided a better understanding of pneumonia under different settings and with various pathogens. However, the concept of one pathogen leading to one infection has been challenged, and recent flu epidemics suggest that some pathogens exhibit highly virulent potential. Although “two hits” animal models have been used to study infectious diseases, few of these models have been described in pneumonia. Therefore the aims of this review were to provide an overview of the available literature in this field, to describe well-studied and uncommon pathogen associations, and to summarize the major insights obtained from this information. PMID:26170617

  8. In-Vehicle Information Systems Behavioral Model and Design Support: Final Report

    DOT National Transportation Integrated Search

    2000-02-16

    A great deal of effort went into producing both the model and the prototype software for this contract. The purpose of this final report is not to duplicate the information provided about these and other topics in previous reports. The purpose is to ...

  9. How Can Biologically-Based Modeling of Arsenic Kinetics and Dynamics Inform the Risk Assessment Process? -- ETD

    EPA Science Inventory

    Quantitative biologically-based models describing key events in the continuum from arsenic exposure to the development of adverse health effects provide a framework to integrate information obtained across diverse research areas. For example, genetic polymorphisms in arsenic me...

  10. Semantic World Modelling and Data Management in a 4d Forest Simulation and Information System

    NASA Astrophysics Data System (ADS)

    Roßmann, J.; Hoppen, M.; Bücken, A.

    2013-08-01

    Various types of 3D simulation applications benefit from realistic forest models. They range from flight simulators for entertainment to harvester simulators for training and tree growth simulations for research and planning. Our 4D forest simulation and information system integrates the necessary methods for data extraction, modelling and management. Using modern methods of semantic world modelling, tree data can efficiently be extracted from remote sensing data. The derived forest models contain position, height, crown volume, type and diameter of each tree. This data is modelled using GML-based data models to assure compatibility and exchangeability. A flexible approach for database synchronization is used to manage the data and provide caching, persistence, a central communication hub for change distribution, and a versioning mechanism. Combining various simulation techniques and data versioning, the 4D forest simulation and information system can provide applications with "both directions" of the fourth dimension. Our paper outlines the current state, new developments, and integration of tree extraction, data modelling, and data management. It also shows several applications realized with the system.
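    The per-tree attributes listed in this record (position, height, crown volume, type, and diameter) can be sketched as a simple data structure; the field names and sample values below are illustrative only, not the paper's actual GML schema:

    ```python
    from dataclasses import dataclass, asdict

    @dataclass
    class TreeRecord:
        """One tree as derived from remote sensing data (attribute names are
        illustrative, not the GML-based model used in the paper)."""
        x: float             # easting  [m]
        y: float             # northing [m]
        height: float        # tree height [m]
        crown_volume: float  # [m^3]
        species: str         # tree type
        dbh: float           # stem diameter at breast height [m]

    tree = TreeRecord(x=352100.5, y=5640032.1, height=27.3,
                      crown_volume=310.0, species="Picea abies", dbh=0.42)
    print(asdict(tree))
    ```

    Serializing such records to a GML-based exchange format, as the paper describes, keeps the extracted forest model compatible with other geospatial tools.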

  11. 77 FR 64339 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-19

    ... development of a model notice of privacy practices (NPP). Need and Proposed Use of the Information: 45 CFR 164... verifying information, processing and maintaining information, and disclosing and providing information, to...-30D] Agency Information Collection Activities; Submission to OMB for Review and Approval; Public...

  12. Advanced Technology Training System on Motor-Operated Valves

    NASA Technical Reports Server (NTRS)

    Wiederholt, Bradley J.; Widjaja, T. Kiki; Yasutake, Joseph Y.; Isoda, Hachiro

    1993-01-01

    This paper describes how features from the field of Intelligent Tutoring Systems are applied to the Motor-Operated Valve (MOV) Advanced Technology Training System (ATTS). The MOV ATTS is a training system developed at Galaxy Scientific Corporation for the Central Research Institute of Electric Power Industry in Japan and the Electric Power Research Institute in the United States. The MOV ATTS combines traditional computer-based training approaches with system simulation, integrated expert systems, and student and expert modeling. The primary goal of the MOV ATTS is to reduce human errors that occur during MOV overhaul and repair. The MOV ATTS addresses this goal by providing basic operational information of the MOV, simulating MOV operation, providing troubleshooting practice of MOV failures, and tailoring this training to the needs of each individual student. The MOV ATTS integrates multiple expert models (functional and procedural) to provide advice and feedback to students. The integration also provides expert model validation support to developers. Student modeling is supported by two separate student models: one model registers and updates the student's current knowledge of basic MOV information, while another model logs the student's actions and errors during troubleshooting exercises. These two models are used to provide tailored feedback to the student during the MOV course.

  13. Information content of incubation experiments for inverse estimation of pools in the Rothamsted carbon model: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Scharnagl, Benedikt; Vrugt, Jasper A.; Vereecken, Harry; Herbst, Michael

    2010-05-01

    Turnover of soil organic matter is usually described with multi-compartment models. However, a major drawback of these models is that the conceptually defined compartments (or pools) do not necessarily correspond to measurable soil organic carbon (SOC) fractions in practice. This not only impairs our ability to rigorously evaluate SOC models but also makes it difficult to derive accurate initial states. In this study, we tested the usefulness and applicability of inverse modeling to derive the various carbon pool sizes in the Rothamsted carbon model (ROTHC) using a synthetic time series of mineralization rates from laboratory incubation. To appropriately account for data and model uncertainty we adopted a Bayesian approach using the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. This Markov chain Monte Carlo scheme derives the posterior probability density distribution of the initial pool sizes at the start of incubation from the observed mineralization rates. We used the Kullback-Leibler divergence to quantify the information contained in the data and to illustrate the effect of increasing incubation times on the reliability of the pool size estimates. Our results show that measured mineralization rates generally provide sufficient information to reliably estimate the sizes of all active pools in the ROTHC model. However, at about 900 days of incubation, these experiments are excessively long. The use of prior information on microbial biomass provided a way to significantly reduce both the uncertainty and the required incubation duration, to about 600 days. Explicit consideration of model parameter uncertainty in the estimation process further impaired the identifiability of the initial pools, especially the more slowly decomposing ones. Our illustrative case studies show how Bayesian inverse modeling can provide important insights into the information content of incubation experiments. 
Moreover, the outcome of this virtual experiment helps to explain the results of related real-world studies on SOC dynamics.
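    The Kullback-Leibler divergence used in this study to quantify the information contained in the data can be illustrated for discrete distributions, where D_KL(P‖Q) = Σ p_i ln(p_i/q_i); the prior and posterior values below are invented for demonstration, not results from the study:

    ```python
    import math

    def kl_divergence(p, q):
        """D_KL(P || Q) = sum p_i * ln(p_i / q_i), in nats.
        Measures the information gained when moving from Q to P."""
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    # Hypothetical prior and posterior over three discretized pool-size bins:
    prior = [1 / 3, 1 / 3, 1 / 3]    # vague prior before any incubation data
    posterior = [0.70, 0.25, 0.05]   # sharpened by observed mineralization rates

    print(kl_divergence(posterior, prior))  # > 0: the observations were informative
    ```

    A larger divergence between posterior and prior indicates that the incubation data constrained the pool sizes more strongly, which is how the study compares incubation durations.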

  14. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model, and interfaces by tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.

  15. An on-demand provision model for geospatial multisource information with active self-adaption services

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Li, Huan

    2015-12-01

    Location-related data are playing an increasingly irreplaceable role in business, government, and scientific research. At the same time, the amount and variety of data are rapidly increasing. It is a challenge to quickly find the required information in this rapidly growing volume of data, as well as to efficiently provide different levels of geospatial data to users. This paper puts forward a data-oriented access model for geographic information science data. First, we analyze the features of GIS data, including traditional types such as vector and raster data and newer types such as Volunteered Geographic Information (VGI). Building on this analysis, a classification scheme for geographic data is proposed and TRAFIE is introduced to describe the establishment of a multi-level model for geographic data. Based on this model, a multi-level, scalable access system for geospatial information is put forward, in which users can select different levels of data according to their concrete application needs. Pull-based and push-based data access mechanisms based on this model are presented, and a Service Oriented Architecture (SOA) was chosen for the data processing. The model is demonstrated through a simulation of fire-disaster data collection supporting the decision-making processes of government departments. The use case shows that the data model and the data provision system are flexible and adaptable.

  16. Environmental Modeling 101: Training Module

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) uses a variety of models to inform decisions that support its missions, this module provides an introduction to environmental modeling with examples of various models and life-cycles.

  17. Comparison of prognostic and diagnostic approaches to modeling evapotranspiration in the Nile river basin

    USDA-ARS?s Scientific Manuscript database

    Actual evapotranspiration (ET) can be estimated using both prognostic and diagnostic modeling approaches, providing independent yet complementary information for hydrologic applications. Both approaches have advantages and disadvantages. When provided with temporally continuous atmospheric forcing d...

  18. Space market model space industry input-output model

    NASA Technical Reports Server (NTRS)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An input-output model of market activity is proposed that is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.
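    The input-output formulation mentioned in this record follows the standard Leontief relation x = (I - A)^-1 d, which converts final-demand data into total sector output. A minimal two-sector sketch (the coefficients and sector interpretations are invented for illustration, not taken from the SMM):

    ```python
    def leontief_output(A, d):
        """Total output x solving x = A x + d for a 2-sector economy,
        i.e. x = (I - A)^-1 d, the standard Leontief input-output relation."""
        (a11, a12), (a21, a22) = A
        det = (1 - a11) * (1 - a22) - a12 * a21
        x1 = ((1 - a22) * d[0] + a12 * d[1]) / det
        x2 = (a21 * d[0] + (1 - a11) * d[1]) / det
        return [x1, x2]

    # Invented technical coefficients: the fraction of each sector's output
    # (e.g., launch services, satellite manufacturing) consumed by the other.
    A = [[0.2, 0.3],
         [0.1, 0.4]]
    d = [100.0, 50.0]  # hypothetical final demand per sector

    print(leontief_output(A, d))  # total output exceeds final demand in each sector
    ```

    The gap between total output and final demand is the intermediate production each sector must supply to the other, which is exactly the kind of structural information an input-output model gives planners.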

  19. Services Oriented Smart City Platform Based On 3d City Model Visualization

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Soave, M.; Devigili, F.; Andreolli, M.; De Amicis, R.

    2014-04-01

    The rapid technological evolution that characterizes all the disciplines involved in the wide concept of smart cities is becoming a key factor in triggering true user-driven innovation. However, to develop the Smart City concept fully over a wide geographical target, an infrastructure is required that allows the integration of heterogeneous geographical information and sensor networks into a common technological ground. In this context 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The work presented in this paper describes an innovative Service Oriented Architecture software platform aimed at providing smart-city services on top of 3D urban models. 3D city models are the basis of many applications and can become the platform for integrating city information within the Smart-City context. In particular, the paper investigates how the efficient visualisation of 3D city models using different levels of detail (LODs) is one of the pivotal technological challenges in supporting Smart-City applications. The goal is to provide the final user with realistic and abstract 3D representations of the urban environment and the possibility to interact with the massive amounts of semantic information contained in the geospatial 3D city model. The proposed solution, using OGC standards and a custom service to provide 3D city models, lets users consume the services and interact with the 3D model via the Web in a more effective way.

  20. US/Canada wheat and barley crop calendar exploratory experiment implementation plan

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A plan is detailed for a supplemental experiment to evaluate several crop growth stage models and crop starter models. The objective of this experiment is to provide timely information to aid in understanding crop calendars and to provide data that will allow a selection between current crop calendar models.

  1. Quantification of terrestrial ecosystem carbon dynamics in the conterminous United States combining a process-based biogeochemical model and MODIS and AmeriFlux data

    USDA-ARS?s Scientific Manuscript database

    Satellite remote sensing provides continuous temporal and spatial information of terrestrial ecosystems. Using these remote sensing data and eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of carbon dynami...

  2. Digital Avionics Information System (DAIS): Reliability and Maintainability Model Users Guide. Final Report, May 1975-July 1977.

    ERIC Educational Resources Information Center

    Czuchry, Andrew J.; And Others

    This report provides a complete guide to the stand alone mode operation of the reliability and maintenance (R&M) model, which was developed to facilitate the performance of design versus cost trade-offs within the digital avionics information system (DAIS) acquisition process. The features and structure of the model, its input data…

  3. Temporal Stability, Correlates, and Longitudinal Outcomes of Career Indecision Factors

    ERIC Educational Resources Information Center

    Nauta, Margaret M.

    2012-01-01

    A confirmatory factor analysis (CFA) tested the fit of Kelly and Lee's six-factor model of career decision problems among 188 college students. The six-factor model did not fit the data well, but a five-factor (Lack of Information, Need for Information, Trait Indecision, Disagreement with Others, and Choice Anxiety) model did provide a good fit.…

  4. Informal cash payments for birth in Hungary: Are women paying to secure a known provider, respect, or quality of care?

    PubMed

    Baji, Petra; Rubashkin, Nicholas; Szebik, Imre; Stoll, Kathrin; Vedam, Saraswathi

    2017-09-01

    In Central and Eastern Europe, many women make informal cash payments to ensure continuity of provider, i.e., to have the "chosen" doctor who provided their prenatal care be present at the birth. High rates of obstetric interventions and disrespectful maternity care are also common in the region. No previous study has examined the associations among informal payments, intervention rates, and quality of maternity care. In 2014 we distributed an online cross-sectional survey to a nationally representative sample of Hungarian internet-using women (N = 600) who had given birth in the previous 5 years. The survey included items on socio-demographics, type of provider, obstetric interventions, and experiences of care. Women reported whether they paid informally, and how much. We built a two-part model, in which a bivariate probit model estimated the conditional probability of paying informally and a GLM modeled the amount of payment, and we calculated marginal effects of the covariates (provider choice, interventions, respectful care). Many more women (79%) with a chosen doctor paid informally (191 euros on average) than women without a chosen doctor (17%, 86 euros). Based on the regression analysis, the chosen doctor's presence at birth was the principal determinant of payment. Intervention and procedure rates were significantly higher for women with a chosen doctor than without (cesareans 45% vs. 33%; inductions 32% vs. 19%; episiotomy 75% vs. 62%; epidural 13% vs. 5%) but had no direct effect on payments. Half of the sample (42% with a chosen doctor, 62% without) reported some form of disrespectful care, but this did not reduce payments. Despite reporting disrespect and higher rates of interventions, women rewarded the presence of a chosen doctor with informal payments. They may be unaware of evidence-based standards and trust that their chosen doctor provided high-quality maternity care. Copyright © 2017 Elsevier Ltd. All rights reserved.
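    The two-part structure described in this record (a probit model for whether a woman pays informally, and a GLM for the amount among payers) can be sketched as follows. The coefficients are invented to roughly reproduce the reported shares (79% vs. 17%) and average amounts (191 vs. 86 euros); they are not the study's estimates:

    ```python
    import math

    def norm_cdf(z):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def expected_payment(chosen_doctor, b0=-0.95, b1=1.75, g0=4.45, g1=0.80):
        """Two-part model: E[payment] = P(pay) * E[amount | pay].
        Part 1: probit for the probability of any informal payment.
        Part 2: log-link GLM for the amount among payers.
        Coefficients are invented for illustration, not estimates from the study."""
        p_pay = norm_cdf(b0 + b1 * chosen_doctor)  # probability of paying informally
        amount = math.exp(g0 + g1 * chosen_doctor) # expected amount if paying, in euros
        return p_pay, p_pay * amount

    print(expected_payment(1))  # with a chosen doctor: high P(pay), larger amount
    print(expected_payment(0))  # without: low P(pay), smaller amount
    ```

    Splitting the decision to pay from the amount paid lets the covariates act differently on each margin, which is why two-part models are standard for semicontinuous outcomes such as informal payments.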

  5. Applying Model Analysis to a Resource-Based Analysis of the Force and Motion Conceptual Evaluation

    ERIC Educational Resources Information Center

    Smith, Trevor I.; Wittmann, Michael C.; Carter, Tom

    2014-01-01

    Previously, we analyzed the Force and Motion Conceptual Evaluation in terms of a resources-based model that allows for clustering of questions so as to provide useful information on how students correctly or incorrectly reason about physics. In this paper, we apply model analysis to show that the associated model plots provide more information…

  6. Incident Management in Academic Information System using ITIL Framework

    NASA Astrophysics Data System (ADS)

    Palilingan, V. R.; Batmetan, J. R.

    2018-02-01

    Incident management is very important to ensure the continuity of a system. Information systems require incident management to ensure that they can provide maximum service in line with the service level offered. Many of the problems that arise in academic information systems stem from incidents that are not handled properly. This study aims to find an appropriate way of managing incidents so that they do not grow into larger problems. The research uses the ITIL framework to solve incident problems; the technique used is adopted and developed from the service operation section of the ITIL framework. The results show that 84.5% of the incidents appearing in the academic information system can be handled quickly and appropriately, and the remaining 15.5% can be escalated without causing new problems. The incident management model applied enables the academic information system to deliver academic services well and efficiently, and to manage resources appropriately so that incidents are handled quickly and easily.

  7. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.

  8. About the Nutrient Model | ECHO | US EPA

    EPA Pesticide Factsheets

    ECHO, Enforcement and Compliance History Online, provides compliance and enforcement information for approximately 800,000 EPA-regulated facilities nationwide. ECHO includes permit, inspection, violation, enforcement action, and penalty information about facilities regulated under the Clean Air Act (CAA) Stationary Source Program, Clean Water Act (CWA) National Pollutant Discharge Elimination System (NPDES), and/or Resource Conservation and Recovery Act (RCRA). Information also is provided on surrounding demographics when available.

  9. Monitoring is not enough: on the need for a model-based approach to migratory bird management

    USGS Publications Warehouse

    Nichols, J.D.; Bonney, Rick; Pashley, David N.; Cooper, Robert; Niles, Larry

    2000-01-01

    Informed management requires information about system state and about effects of potential management actions on system state. Population monitoring can provide the needed information about system state, as well as information that can be used to investigate effects of management actions. Three methods for investigating effects of management on bird populations are (1) retrospective analysis, (2) formal experimentation and constrained-design studies, and (3) adaptive management. Retrospective analyses provide weak inferences, regardless of the quality of the monitoring data. The active use of monitoring data in experimental or constrained-design studies or in adaptive management is recommended. Under both approaches, learning occurs via the comparison of estimates from the monitoring program with predictions from competing management models.
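    The adaptive-management learning described in this record, in which estimates from monitoring are compared with predictions from competing management models, is often formalized as Bayesian updating of model weights. A minimal sketch with invented population numbers (not data from any monitoring program):

    ```python
    import math

    def gaussian_likelihood(observed, predicted, sigma):
        """Likelihood of the monitored population estimate under one model,
        assuming Gaussian observation error with standard deviation sigma."""
        z = (observed - predicted) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

    def update_weights(weights, observed, predictions, sigma):
        """One step of adaptive-management learning: reweight competing models
        by how well each predicted the monitored system state (Bayes' rule)."""
        likes = [w * gaussian_likelihood(observed, p, sigma)
                 for w, p in zip(weights, predictions)]
        total = sum(likes)
        return [l / total for l in likes]

    # Hypothetical: two models predict next year's population; monitoring says 980.
    weights = [0.5, 0.5]  # equal prior credibility
    weights = update_weights(weights, observed=980,
                             predictions=[1000, 900], sigma=50)
    print(weights)  # the model that predicted 1000 gains weight
    ```

    Repeating this update each monitoring cycle is what lets management "learn" which model best describes the bird population, rather than relying on retrospective analysis alone.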

  10. 17 CFR Appendix A to Part 160 - Model Privacy Form

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... rates and payments; retirement assets; checking account information; employment information; wire... additional or different information, such as a random opt-out number or a truncated account number, to... retirement earnings; apply for financing; apply for a lease; provide account information; give us your...

  11. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  12. Information Adequacy and Communication Relationships: An Empirical Examination of 18 Organizations.

    ERIC Educational Resources Information Center

    Spiker, Barry K.; Daniels, Tom D.

    1981-01-01

    Noted that satisfaction with organizational life may be a function of equivocality (uncertainty) reduction rather than participation. Tested the hypothesis that perceived information adequacy is an indicator of equivocality. Results suggest that the equivocality model provides a better explanation of satisfaction than the participation model. (PD)

  13. A Supervisory View of Unit Effectiveness. Technical Report.

    ERIC Educational Resources Information Center

    Weitzel, William; And Others

    First-level supervisors from a cross section of business and industrial organizations provided evaluative and descriptive information about the immediate work group which each supervised. From this information, a model was built depicting first-level supervisory perceptions of behaviors which lead to work unit effectiveness. This model was…

  14. An Integrated Model of Emotion Processes and Cognition in Social Information Processing.

    ERIC Educational Resources Information Center

    Lemerise, Elizabeth A.; Arsenio, William F.

    2000-01-01

    Interprets literature on contributions of social cognitive and emotion processes to children's social competence in the context of an integrated model of emotion processes and cognition in social information processing. Provides neurophysiological and functional evidence for the centrality of emotion processes in personal-social decision making.…

  15. Aerodynamic and Hydrodynamic Tests of a Family of Models of Flying Hulls Derived from a Streamline Body -- NACA Model 84 Series

    NASA Technical Reports Server (NTRS)

    Parkinson, John B; Olson, Roland E; Draley, Eugene C; Luoma, Arvo A

    1943-01-01

    A series of related forms of flying-boat hulls representing various degrees of compromise between aerodynamic and hydrodynamic requirements was tested in Langley Tank No. 1 and in the Langley 8-foot high-speed tunnel. The purpose of the investigation was to provide information regarding the penalties in water performance resulting from further aerodynamic refinement and, as a corollary, to provide information regarding the penalties in range or payload resulting from the retention of certain desirable hydrodynamic characteristics. The information should form a basis for over-all improvements in hull form.

  16. Comparison of Prognostic and Diagnostic Approaches to Modeling Evapotranspiration in the Nile River Basin

    NASA Astrophysics Data System (ADS)

    Yilmaz, M.; Anderson, M. C.; Zaitchik, B. F.; Crow, W. T.; Hain, C.; Ozdogan, M.; Chun, J. A.

    2012-12-01

    Actual evapotranspiration (ET) can be estimated using both prognostic and diagnostic modeling approaches, providing independent yet complementary information for hydrologic applications. Both approaches have advantages and disadvantages. When provided with temporally continuous atmospheric forcing data, prognostic models offer continuous sub-daily ET information together with the full set of water and energy balance fluxes and states (i.e. soil moisture, runoff, sensible and latent heat). On the other hand, the diagnostic modeling approach provides ET estimates over regions where reliable information about available soil water is not known (e.g., due to irrigation practices or shallow ground water levels not included in the prognostic model structure, unknown soil texture or plant rooting depth, etc). Prognostic model-based ET estimates are of great interest whenever consistent and complete water budget information is required or when there is a need to project ET for climate or land use change scenarios. Diagnostic models establish a stronger link to remote sensing observations, can be applied in regions with limited or questionable atmospheric forcing data, and provide valuable observation-derived information about the current land-surface state. Analysis of independently obtained ET estimates is particularly important in data poor regions. Such comparisons can help to reduce the uncertainty in the modeled ET estimates and to exclude outliers based on physical considerations. The Nile river basin is home to tens of millions of people whose daily life depends on water extracted from the river Nile. Yet the complete basin scale water balance of the Nile has been studied only a few times, and the temporal and the spatial distribution of hydrological fluxes (particularly ET) are still a subject of active research. This is due in part to a scarcity of ground-based station data for validation. 
In such regions, comparison between prognostic and diagnostic model output may be a valuable model evaluation tool. Motivated by the complementary information that exists in prognostic and diagnostic energy balance modeling, as well as the need for evaluation of water consumption estimates over the Nile basin, the purpose of this study is to 1) better describe the conceptual differences between prognostic and diagnostic modeling, 2) present the potential for diagnostic models to capture important hydrologic features that are not explicitly represented in prognostic models, and 3) explore the differences between these two approaches over the Nile Basin, where ground data are sparse and transnational data sharing is unreliable. More specifically, we will compare output from the Noah prognostic model and the Atmosphere-Land Exchange Inverse (ALEXI) diagnostic model generated over the data-poor Nile basin. Preliminary results indicate that ALEXI and Noah flux estimates are consistent in space, time, and magnitude over the irrigated Delta region, while differences remain over river-fed wetlands.

  17. Creating Better Library Information Systems: The Road to FRBR-Land

    ERIC Educational Resources Information Center

    Mercun, Tanja; Švab, Katarina; Harej, Viktor; Žumer, Maja

    2013-01-01

    Introduction: To provide valuable services in the future, libraries will need to create better information systems and set up an infrastructure more in line with the current technologies. The "Functional Requirements for Bibliographic Records" conceptual model provides a basis for this transformation, but there are still a number of…

  18. A Critical Analysis of the Child and Adolescent Wellness Scale (CAWS)

    ERIC Educational Resources Information Center

    Weller-Clarke, Alandra

    2006-01-01

    Current practice for assessing children and adolescents relies on objectively scored, deficit-based models and/or informal assessments to determine how maladaptive behaviors affect performance. Social-emotional assessment instruments are used in schools and typically provide information related to behavioral and emotional deficits, but provide little…

  19. Mouse Genome Informatics (MGI) Is the International Resource for Information on the Laboratory Mouse.

    PubMed

    Law, MeiYee; Shaw, David R

    2018-01-01

    Mouse Genome Informatics (MGI, http://www.informatics.jax.org/ ) web resources provide free access to meticulously curated information about the laboratory mouse. MGI's primary goal is to help researchers investigate the genetic foundations of human diseases by translating information from mouse phenotype and disease model studies to human systems. MGI provides comprehensive phenotype data for over 50,000 mutant alleles in mice and experimental model descriptions for over 1500 human diseases. Curated data from scientific publications are integrated with those from high-throughput phenotyping and gene expression centers. Data are standardized using defined, hierarchical vocabularies such as the Mammalian Phenotype (MP) Ontology, the Mouse Developmental Anatomy Ontology, and the Gene Ontology (GO). This chapter introduces you to Gene and Allele Detail pages and provides step-by-step instructions for simple searches and those that take advantage of the breadth of MGI data integration.

  20. THE ART OF DATA MINING THE MINEFIELDS OF TOXICITY ...

    EPA Pesticide Factsheets

    Toxicity databases have a special role in predictive toxicology, providing ready access to historical information throughout the workflow of discovery, development, and product safety processes in drug development as well as in review by regulatory agencies. To provide accurate information within a hypothesis-building environment, the content of the databases needs to be rigorously modeled using standards and controlled vocabulary. The utilitarian purposes of databases widely vary, ranging from a source for (Q)SAR datasets for modelers to a basis for…

  1. Data Assimilation to Extract Soil Moisture Information From SMAP Observations

    NASA Technical Reports Server (NTRS)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-01-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, they can be used to reduce the need for localized bias correction techniques typically implemented in data assimilation (DA) systems that tend to remove some of the independent information provided by satellite observations. Here, we use a statistical neural network (NN) algorithm to retrieve SMAP (Soil Moisture Active Passive) surface soil moisture estimates in the climatology of the NASA Catchment land surface model. Assimilating these estimates without additional bias correction is found to significantly reduce the model error and increase the temporal correlation against SMAP CalVal in situ observations over the contiguous United States. A comparison with assimilation experiments using traditional bias correction techniques shows that the NN approach better retains the independent information provided by the SMAP observations and thus leads to larger model skill improvements during the assimilation. A comparison with the SMAP Level 4 product shows that the NN approach is able to provide comparable skill improvements and thus represents a viable assimilation approach.
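The key property described above — a statistical retrieval trained against the land surface model produces estimates that already sit in the model's climatology, so no separate bias correction is needed — can be sketched with a simple linear stand-in for the neural network. All data below are synthetic; the brightness-temperature and soil-moisture names are for orientation only, not actual SMAP or Catchment values.

```python
import random
import statistics

random.seed(42)

# Synthetic stand-ins: brightness temperature (K) and land-surface-model
# surface soil moisture (m3/m3). The retrieval is trained against the
# model, not against in situ observations.
tb = [random.uniform(240.0, 290.0) for _ in range(500)]
model_sm = [0.45 - 0.0015 * t + random.gauss(0.0, 0.01) for t in tb]

def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b (linear stand-in for the NN)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

a, b = fit_linear(tb, model_sm)
retrieved = [a * t + b for t in tb]

# Because the retrieval was trained on the model itself, its climatological
# mean matches the model's, so the estimates can be assimilated without an
# additional bias-correction step.
bias = statistics.fmean(retrieved) - statistics.fmean(model_sm)
print(f"climatological bias of retrieval: {bias:.6f}")
```

The retrieval still varies with the observed input (here, `tb`), which is how the independent temporal signature of the satellite observations is retained.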

  2. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.

  3. Multi-model inference for incorporating trophic and climate uncertainty into stock assessments

    NASA Astrophysics Data System (ADS)

    Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim

    2016-12-01

    Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied with a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea: walleye pollock, Pacific cod, and arrowtooth flounder using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluate model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
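The model-averaging step described in this abstract can be sketched in a few lines. The numbers below are invented for illustration, and Akaike weights are used here as a simple stand-in for the Bayesian model weights the study itself derives:

```python
import math

# Hypothetical reference-point estimates (e.g., target biomass, kt) and
# AIC-like scores for three assessment models; values are illustrative only.
models = {
    "single-species":       {"ref_point": 1200.0, "aic": 410.2},
    "temp-specific weight": {"ref_point": 1100.0, "aic": 408.7},
    "multi-species temp":   {"ref_point": 950.0,  "aic": 409.5},
}

# Akaike weights: relative support for each model given its score.
best = min(m["aic"] for m in models.values())
raw = {k: math.exp(-0.5 * (m["aic"] - best)) for k, m in models.items()}
total = sum(raw.values())
weights = {k: v / total for k, v in raw.items()}

# Model-averaged biological reference point.
avg_ref = sum(weights[k] * models[k]["ref_point"] for k in models)
print({k: round(w, 3) for k, w in weights.items()})
print(f"model-averaged reference point: {avg_ref:.1f}")
```

The averaged reference point falls between the most divergent single-model estimates, and the weights make explicit how much each model structure contributes to the final advice.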

  4. An Evaluation of Understandability of Patient Journey Models in Mental Health.

    PubMed

    Percival, Jennifer; McGregor, Carolyn

    2016-07-28

    There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information systems and health information technology can benefit the patients, staff, and the delivery of care. This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing-based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. 
Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers.

  5. The use of geospatial web services for exchanging utilities data

    NASA Astrophysics Data System (ADS)

    Kuczyńska, Joanna

    2013-04-01

    Geographic information technologies and related geo-information systems currently play an important role in the management of public administration in Poland. One of these tasks is to maintain and update the Geodetic Evidence of Public Utilities (GESUT), part of the National Geodetic and Cartographic Resource, which contains technical infrastructure information important to many institutions. This requires an active exchange of data between the Geodesy and Cartography Documentation Centers and the institutions that administer transmission lines. The administrator of public utilities is legally obliged to provide information about utilities to GESUT. The aim of the research work was to develop a universal data exchange methodology that can be implemented on a variety of hardware and software platforms. This methodology uses the Unified Modeling Language (UML), eXtensible Markup Language (XML), and Geography Markup Language (GML). The proposed methodology is based on two different strategies: Model Driven Architecture (MDA) and Service Oriented Architecture (SOA). The solutions used are consistent with the INSPIRE Directive and the ISO 19100 series of standards for geographic information. On the basis of an analysis of the input data structures, conceptual models were built for both databases. The models were written in the universal modeling language UML. A combined model defining a common data structure was also built. This model was transformed into GML, the standard developed for the exchange of geographic information. The structure of the document describing the data that may be exchanged is defined in an .xsd file. Network services were selected and implemented in a system designed for data exchange based on open-source tools. The methodology was implemented and tested. Data in the agreed data structure, together with metadata, were set up on the server. 
Data access was provided by geospatial network services: data searching via the Catalogue Service for the Web (CSW) and data collection via the Web Feature Service (WFS). WFS also provides operations for modifying data, for example so that the utility administrator can update them. The proposed solution significantly increases the efficiency of data exchange and facilitates maintenance of the National Geodetic and Cartographic Resource.
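A WFS data-collection request of the kind described above is just a parameterized HTTP call. The sketch below builds a standard WFS 1.1.0 GetFeature URL; the endpoint and layer name are placeholders invented for illustration, not the actual GESUT service:

```python
from urllib.parse import urlencode

# Hypothetical WFS endpoint and feature type; placeholders only.
WFS_ENDPOINT = "https://example.gov.pl/geoserver/wfs"

def getfeature_url(endpoint, typename, bbox=None, max_features=None):
    """Build a WFS 1.1.0 GetFeature request URL (response is GML)."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": typename,
    }
    if bbox:
        # WFS bbox order for 1.1.0 with default CRS: minx,miny,maxx,maxy
        params["bbox"] = ",".join(str(c) for c in bbox)
    if max_features:
        params["maxFeatures"] = str(max_features)
    return endpoint + "?" + urlencode(params)

url = getfeature_url(WFS_ENDPOINT, "gesut:water_pipeline",
                     bbox=(20.9, 52.1, 21.1, 52.3), max_features=100)
print(url)
```

A WFS that supports transactions (WFS-T) additionally accepts Insert/Update/Delete operations, which is the mechanism the abstract refers to for updates by the utility administrator.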

  6. An Object-Based Approach to Evaluation of Climate Variability Projections and Predictions

    NASA Astrophysics Data System (ADS)

    Ammann, C. M.; Brown, B.; Kalb, C. P.; Bullock, R.

    2017-12-01

    Evaluations of the performance of earth system model predictions and projections are of critical importance to enhance usefulness of these products. Such evaluations need to address specific concerns depending on the system and decisions of interest; hence, evaluation tools must be tailored to inform about specific issues. Traditional approaches that summarize grid-based comparisons of analyses and models, or between current and future climate, often do not reveal important information about the models' performance (e.g., spatial or temporal displacements; the reason behind a poor score) and are unable to accommodate these specific information needs. For example, summary statistics such as the correlation coefficient or the mean-squared error provide minimal information to developers, users, and decision makers regarding what is "right" and "wrong" with a model. New spatial and temporal-spatial object-based tools from the field of weather forecast verification (where comparisons typically focus on much finer temporal and spatial scales) have been adapted to more completely answer some of the important earth system model evaluation questions. In particular, the Method for Object-based Diagnostic Evaluation (MODE) tool and its temporal (three-dimensional) extension (MODE-TD) have been adapted for these evaluations. More specifically, these tools can be used to address spatial and temporal displacements in projections of El Nino-related precipitation and/or temperature anomalies, ITCZ-associated precipitation areas, atmospheric rivers, seasonal sea-ice extent, and other features of interest. Examples of several applications of these tools in a climate context will be presented, using output of the CESM large ensemble. In general, these tools provide diagnostic information about model performance - accounting for spatial, temporal, and intensity differences - that cannot be achieved using traditional (scalar) model comparison approaches. 
Thus, they can provide more meaningful information that can be used in decision-making and planning. Future extensions and applications of these tools in a climate context will be considered.
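The core of an object-based comparison in the spirit of MODE can be sketched very compactly: threshold two fields, identify connected objects, and measure the displacement between matched objects' centroids. The fields below are toy data, not CESM output:

```python
def find_objects(field, threshold):
    """Label 4-connected components of grid cells exceeding threshold."""
    rows, cols = len(field), len(field[0])
    seen, objects = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or field[r][c] < threshold:
                continue
            stack, cells = [(r, c)], []
            seen.add((r, c))
            while stack:  # depth-first flood fill
                y, x = stack.pop()
                cells.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and field[ny][nx] >= threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            objects.append(cells)
    return objects

def centroid(cells):
    n = len(cells)
    return (sum(y for y, _ in cells) / n, sum(x for _, x in cells) / n)

# Toy "observed" and "modeled" anomaly fields: the model places the same
# feature one column too far east.
obs = [[0, 0, 0, 0, 0],
       [0, 2, 2, 0, 0],
       [0, 2, 2, 0, 0],
       [0, 0, 0, 0, 0]]
mod = [[0, 0, 0, 0, 0],
       [0, 0, 2, 2, 0],
       [0, 0, 2, 2, 0],
       [0, 0, 0, 0, 0]]

o = centroid(find_objects(obs, 1)[0])
m = centroid(find_objects(mod, 1)[0])
print(f"centroid displacement (rows, cols): "
      f"({m[0] - o[0]:.1f}, {m[1] - o[1]:.1f})")  # -> (0.0, 1.0)
```

A gridpoint-by-gridpoint score would heavily penalize this forecast despite the feature being essentially correct; the object view isolates the displacement, which is exactly the diagnostic information a scalar metric hides.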

  7. An ontology model for nursing narratives with natural language generation technology.

    PubMed

    Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung

    2013-01-01

    The purpose of this study was to develop an ontology model to generate nursing narratives as natural as human language from the entity-attribute-value triplets of a detailed clinical model using natural language generation technology. The model was based on the types of information and the documentation time of that information along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.

  8. Data Model Management for Space Information Systems

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris

    2006-01-01

    The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time-consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example, the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. 
This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool. We will also describe the current effort to provide interoperability with the European Space Agency (ESA)/Planetary Science Archive (PSA) which is critically dependent on a common data model.

  9. ICIS Model

    EPA Pesticide Factsheets

    The Integrated Compliance Information System (ICIS) is a web-based system that provides information for the federal enforcement and compliance (FE&C) and the National Pollutant Discharge Elimination System (NPDES) programs.

  10. "When information is not enough": A model for understanding BRCA-positive previvors' information needs regarding hereditary breast and ovarian cancer risk.

    PubMed

    Dean, Marleah; Scherr, Courtney L; Clements, Meredith; Koruo, Rachel; Martinez, Jennifer; Ross, Amy

    2017-09-01

    To investigate BRCA-positive, unaffected patients' - referred to as previvors - information needs after testing positive for a deleterious BRCA genetic mutation. 25 qualitative interviews were conducted with previvors. Data were analyzed using the constant comparison method of grounded theory. Analysis revealed a theoretical model of previvors' information needs related to the stage of their health journey. Specifically, a four-stage model was developed based on the data: (1) pre-testing information needs, (2) post-testing information needs, (3) pre-management information needs, and (4) post-management information needs. Two recurring dimensions of desired knowledge also emerged within the stages-personal/social knowledge and medical knowledge. While previvors may be genetically predisposed to develop cancer, they have not been diagnosed with cancer, and therefore have different information needs than cancer patients and cancer survivors. This model can serve as a framework for assisting healthcare providers in meeting the specific information needs of cancer previvors. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Spilling over strain between elders and their caregivers in Hong Kong.

    PubMed

    Cheung, Chau-Kiu; Chow, Esther Oi-wah

    2006-01-01

    According to the dialectical model, the well-being of the older care recipient, the informal caregiver, and the professional care provider mutually affect each other. Particularly, the caregiver's strain can affect the care recipient's well-being both positively and negatively. Moreover, the task-specific model suggests that as social workers are responsible for maintaining elders' well-being, the workers' strain would be particularly influential on the elders' well-being. To clarify these dialectic relationships, the present study surveyed the three parties involved in home help or home care services in Hong Kong over two successive waves using a panel design. This study reveals the significant negative effect the professional care provider's earlier strain has on the elder's later well-being. Moreover, the social worker's earlier strain was particularly detrimental to the elder's later well-being. In contrast, the effect of the informal caregiver's earlier strain was not significant. Additionally, the elder's well-being had no significant impact on the strain of either the professional care provider or the informal caregiver. Findings of this study support the qualification of the dialectical model by the task-specific model to yield a model of channeled spillover. Accordingly, dialectical influence requires a channel to materialize the spillover effect.

  12. Using Model-Based System Engineering to Provide Artifacts for NASA Project Life-Cycle and Technical Reviews Presentation

    NASA Technical Reports Server (NTRS)

    Parrott, Edith L.; Weiland, Karen J.

    2017-01-01

    This is the presentation for the AIAA Space conference in September 2017. It highlights key information from the paper "Using Model-Based Systems Engineering to Provide Artifacts for NASA Project Life-cycle and Technical Reviews."

  13. Enhancing Transparency and Control When Drawing Data-Driven Inferences About Individuals.

    PubMed

    Chen, Daizhuo; Fraiberger, Samuel P; Moakler, Robert; Provost, Foster

    2017-09-01

    Recent studies show the remarkable power of fine-grained information disclosed by users on social network sites to infer users' personal characteristics via predictive modeling. Similar fine-grained data are being used successfully in other commercial applications. In response, attention is turning increasingly to the transparency that organizations provide to users as to what inferences are drawn and why, as well as to what sort of control users can be given over inferences that are drawn about them. In this article, we focus on inferences about personal characteristics based on information disclosed by users' online actions. As a use case, we explore personal inferences that are made possible from "Likes" on Facebook. We first present a means for providing transparency into the information responsible for inferences drawn by data-driven models. We then introduce the "cloaking device"-a mechanism for users to inhibit the use of particular pieces of information in inference. Using these analytical tools we ask two main questions: (1) How much information must users cloak to significantly affect inferences about their personal traits? We find that usually users must cloak only a small portion of their actions to inhibit inference. We also find that, encouragingly, false-positive inferences are significantly easier to cloak than true-positive inferences. (2) Can firms change their modeling behavior to make cloaking more difficult? The answer is a definitive yes. We demonstrate a simple modeling change that requires users to cloak substantially more information to affect the inferences drawn. The upshot is that organizations can provide transparency and control even into complicated, predictive model-driven inferences, but they also can make control easier or harder for their users.

  15. Exploring Bim for Operational Integrated Asset Management - a Preliminary Study Utilising Real-World Infrastructure Data

    NASA Astrophysics Data System (ADS)

    Boyes, G. A.; Ellul, C.; Irwin, D.

    2017-10-01

    The use of 3D information models within collaborative working environments and the practice of Building Information Modelling (BIM) are becoming more commonplace within infrastructure projects. Currently used predominantly during the design and construction phase, the use of BIM is capable in theory of providing the information at handover that will satisfy the Asset Information Requirements (AIRs) of the future Infrastructure Manager (IM). One particular challenge is establishing a link between existing construction-centric information and the asset-centric information needed for future operations. Crossrail, a project to build a new high-frequency railway underneath London, is handling many such challenges as they prepare to handover their digital information to the future operator, in particular the need to provide a two-way link between a federated 3D CAD model and an object-relational Asset Information Management System (AIMS). This paper focusses on the potential for improved Asset Management (AM) by integrating BIM and GIS systems and practices, and makes a preliminary report on how 3D spatial queries can be used to establish a two-way relational link between two information systems (3D geometry and asset lists), as well as the challenges being overcome to transform the data to be suitable for AM.
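The two-way link described above — relating 3D geometry to an asset register via spatial queries — can be sketched in miniature: each model element carries an axis-aligned bounding box, each asset a 3D point, and containment yields the link in both directions. Element IDs, asset tags, and coordinates are invented for illustration:

```python
# Hypothetical federated-model elements: (min xyz, max xyz) bounding boxes.
elements = {
    "ELEM-001": ((0, 0, 0), (10, 5, 3)),
    "ELEM-002": ((10, 0, 0), (20, 5, 3)),
}
# Hypothetical asset register: asset tag -> 3-D location.
assets = {
    "ASSET-PUMP-01": (4.0, 2.0, 1.0),
    "ASSET-FAN-07": (15.0, 1.0, 2.5),
}

def contains(box, p):
    """Axis-aligned bounding-box containment test."""
    (x0, y0, z0), (x1, y1, z1) = box
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1

# Forward link: asset -> enclosing model element (None if unmatched).
asset_to_elem = {
    a: next((e for e, b in elements.items() if contains(b, p)), None)
    for a, p in assets.items()
}
# Inverse link: element -> assets it contains.
elem_to_assets = {}
for a, e in asset_to_elem.items():
    elem_to_assets.setdefault(e, []).append(a)

print(asset_to_elem)
```

Real BIM/GIS integration must of course handle irregular geometry, coordinate transformations, and unmatched or ambiguous cases, but the relational pattern is the same: a spatial predicate populates a join table between the two information systems.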

  16. Information system modeling for biomedical imaging applications

    NASA Astrophysics Data System (ADS)

    Hoo, Kent S., Jr.; Wong, Stephen T. C.

    1999-07-01

    Information system modeling has historically been relegated to a low priority among the designers of information systems. Often there is a rush to design and implement hardware and software solutions after only the briefest assessments of the domain requirements. Although this process results in a rapid development cycle, the system usually does not satisfy the needs of the users, and the developers are forced to re-program certain aspects of the system. It would be much better to create an accurate model of the system based on the domain needs so that the implementation of the solution satisfies the needs of the users immediately. It would also be advantageous to build extensibility into the model so that updates to the system could be carried out in an organized fashion. The significance of this research is the development of a new formal framework for the construction of a multimedia medical information system. This formal framework is constructed using visual modeling, which provides a way of thinking about problems using models organized around real-world ideas. These models provide an abstract way to view complex problems, making them easier to understand. The formal framework is the result of an object-oriented analysis and design process that translates the system's requirements and functionality into software models. The usefulness of this information framework is demonstrated with two different applications in epilepsy research and care, i.e., surgical planning for epilepsy and decision-threshold determination.

  17. System analysis through bond graph modeling

    NASA Astrophysics Data System (ADS)

    McBride, Robert Thomas

    2005-07-01

    Modeling and simulation form an integral role in the engineering design process. An accurate mathematical description of a system provides the design engineer the flexibility to perform trade studies quickly and accurately to expedite the design process. Most often, the mathematical model of the system contains components of different engineering disciplines. A modeling methodology that can handle these types of systems might be used in an indirect fashion to extract added information from the model. This research examines the ability of a modeling methodology to provide added insight into system analysis and design. The modeling methodology used is bond graph modeling. An investigation into the creation of a bond graph model using the Lagrangian of the system is provided. Upon creation of the bond graph, system analysis is performed. To aid in the system analysis, an object-oriented approach to bond graph modeling is introduced. A framework is provided to simulate the bond graph directly. Through object-oriented simulation of a bond graph, the information contained within the bond graph can be exploited to create a measurement of system efficiency. A definition of system efficiency is given. This measurement of efficiency is used in the design of different controllers of varying architectures. Optimal control of a missile autopilot is discussed within the framework of the calculated system efficiency.
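The state equations a bond graph yields for a simple system can be simulated directly, and the energy bookkeeping the graph makes explicit is what underpins the efficiency measure described above. The sketch below uses a mass-spring-damper (an I, C, and R element on a common 1-junction) with arbitrary illustrative parameters, integrated by forward Euler:

```python
# Bond graph states for a mass-spring-damper on a 1-junction:
#   q = spring displacement (C element state)
#   p = mass momentum       (I element state)
m, k, b = 1.0, 4.0, 0.5    # inertia (I), stiffness (1/C), resistance (R)
q, p = 1.0, 0.0            # initial displacement, at rest
dt = 0.001

def energy(q, p):
    """Total energy stored in the C and I elements."""
    return 0.5 * k * q * q + 0.5 * p * p / m

e0 = energy(q, p)
for _ in range(5000):           # 5 s of simulated time, forward Euler
    dq = p / m                  # flow through the C element
    dp = -k * q - b * (p / m)   # net effort on the I element
    q, p = q + dt * dq, p + dt * dp

# The R element dissipates energy, so stored energy must decrease --
# the graph's power bonds make this balance directly observable.
print(f"stored energy: {e0:.3f} -> {energy(q, p):.3f}")
```

Because every bond carries an effort-flow pair whose product is power, tracking dissipation versus stored energy falls out of the model structure itself, which is the property exploited for the efficiency-based controller comparisons.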

  18. How Adverse Outcome Pathways Can Aid the Development and Use of Computational Prediction Models for Regulatory Toxicology.

    PubMed

    Wittwehr, Clemens; Aladjov, Hristo; Ankley, Gerald; Byrne, Hugh J; de Knecht, Joop; Heinzle, Elmar; Klambauer, Günter; Landesmann, Brigitte; Luijten, Mirjam; MacKay, Cameron; Maxwell, Gavin; Meek, M E Bette; Paini, Alicia; Perkins, Edward; Sobanski, Tomasz; Villeneuve, Dan; Waters, Katrina M; Whelan, Maurice

    2017-02-01

    Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumulated mechanistic understanding. The adverse outcome pathway (AOP) framework provides a systematic approach for organizing knowledge that may support such inference. Likewise, computational models of biological systems at various scales provide another means and platform to integrate current biological understanding to facilitate inference and extrapolation. We argue that the systematic organization of knowledge into AOP frameworks can inform and help direct the design and development of computational prediction models that can further enhance the utility of mechanistic and in silico data for chemical safety assessment. This concept was explored as part of a workshop on AOP-Informed Predictive Modeling Approaches for Regulatory Toxicology held September 24-25, 2015. Examples of AOP-informed model development and its application to the assessment of chemicals for skin sensitization and multiple modes of endocrine disruption are provided. The role of problem formulation, not only as a critical phase of risk assessment, but also as a guide for both AOP and complementary model development is described. Finally, a proposal for actively engaging the modeling community in AOP-informed computational model development is made. The contents serve as a vision for how AOPs can be leveraged to facilitate development of computational prediction models needed to support the next generation of chemical safety assessment. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology.

  19. PIMMS tools for capturing metadata about simulations

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

    PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology, which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs), where a standard set of questions is asked of all models which perform standard sets of experiments; disciplinary-level metadata collection, where a standard set of questions is asked of all models but experiments are specified by users; and bespoke metadata creation, where the users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community, where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex, with many variables that can be modified.
Therefore it has become common practice to begin a series of experiments by using another climate model configuration as a starting point. Usually this other configuration is provided by a researcher in the same research group or by a previous collaborator with whom there is an existing scientific relationship. Some efforts have been made at the university department level to create documentation but there is a wide diversity in the scope and purpose of this information. The consistent and comprehensive documentation enabled by PIMMS will enable the wider sharing of climate model data and configuration information. The PIMMS methodology assumes an initial effort to document standard model configurations. Once these descriptions have been created users need only describe the specific way in which their model configuration is different from the standard. Thus the documentation burden on the user is specific to the experiment they are performing and fits easily into the workflow of doing their science. PIMMS metadata is independent of data and as such is ideally suited for documenting model development. PIMMS provides a framework for sharing information about failed model configurations for which data are not kept, the negative results that don't appear in scientific literature. PIMMS is a UK project funded by JISC, The University of Reading, The University of Bristol and STFC.
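
    The incremental documentation workflow described above — document a standard configuration once, then record only how each experiment differs from it — can be sketched as a simple dictionary diff. The configuration keys and values below are invented for illustration and are not Metafor CIM vocabulary.

    ```python
    # Sketch of documenting a run as its differences from a standard
    # model configuration. Keys/values are invented, not CIM terms.

    def config_diff(standard, experiment):
        """Return only the settings the experiment changes or adds."""
        return {k: v for k, v in experiment.items() if standard.get(k) != v}

    standard = {"resolution": "N96", "ocean": "coupled", "aerosols": "on"}
    experiment = {"resolution": "N96", "ocean": "coupled",
                  "aerosols": "off", "co2": "4xCO2"}
    delta = config_diff(standard, experiment)
    # delta holds only the experiment-specific settings
    ```

    The documentation burden on the user is then proportional to what the experiment actually changes, as the abstract notes.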

  20. Optimal control of epidemic information dissemination over networks.

    PubMed

    Chen, Pin-Yu; Cheng, Shin-Ming; Chen, Kwang-Cheng

    2014-12-01

    Information dissemination control is of crucial importance to facilitate reliable and efficient data delivery, especially in networks consisting of time-varying or heterogeneous links. Since the abstraction of information dissemination closely resembles the spread of epidemics, epidemic models are utilized to characterize the collective dynamics of information dissemination over networks. From a systems point of view, we aim to explore the optimal control policy for information dissemination, given that the control capability is a function of its distribution time, which is a more realistic model in many applications. The main contributions of this paper are to provide an analytically tractable model for information dissemination over networks, to solve for the optimal control signal distribution time that minimizes the accumulated network cost via dynamic programming, and to establish a parametric plug-in model for information dissemination control. In particular, we evaluate its performance in mobile and generalized social networks as typical examples.
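
    The trade-off the abstract describes can be illustrated (not the paper's exact formulation) with a susceptible-infected model of information spread, where a control signal distributed at time `t_ctrl` begins silencing spreaders, and the distribution time is chosen by brute-force search over the accumulated cost. All rates and cost weights below are assumed values.

    ```python
    # Illustrative SIR-style information dissemination with a control
    # signal that starts "recovering" (silencing) spreaders at t_ctrl.
    # Cost = accumulated infected load + per-unit-time control cost.
    # Parameters are illustrative assumptions, not the paper's values.

    def accumulated_cost(t_ctrl, beta=0.4, gamma=0.2, c_ctrl=0.05,
                         dt=0.01, t_end=60.0):
        s, i = 0.99, 0.01          # susceptible / infected fractions
        cost, t = 0.0, 0.0
        while t < t_end:
            recovery = gamma if t >= t_ctrl else 0.0
            ds = -beta * s * i
            di = beta * s * i - recovery * i
            cost += (i + (c_ctrl if t >= t_ctrl else 0.0)) * dt
            s += ds * dt
            i += di * dt
            t += dt
        return cost

    def best_control_time(candidates):
        """Pick the distribution time minimizing accumulated cost."""
        return min(candidates, key=accumulated_cost)
    ```

    Distributing the control signal very late lets the infected load accumulate, while distributing it early pays control cost before much information has spread, so the search weighs the two.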

  1. PCS-ICIS Model

    EPA Pesticide Factsheets

    The Integrated Compliance Information System (ICIS) is a web-based system that provides information for the federal enforcement and compliance (FE&C) and the National Pollutant Discharge Elimination System (NPDES) programs.

  2. Open Energy Info (OpenEI) (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2010-12-01

    The Open Energy Information (OpenEI.org) initiative is a free, open-source, knowledge-sharing platform. OpenEI was created to provide access to data, models, tools, and information that accelerate the transition to clean energy systems through informed decisions.

  3. Enriching step-based product information models to support product life-cycle activities

    NASA Astrophysics Data System (ADS)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information across its life-cycle requires standardized data exchange protocols. The Standard for the Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because the models are large and loosely organized. Data exchange specifications (DEXs) and templates provide re-organized information models required in the data exchange of specific activities for various businesses. DEXs show that it is possible to organize STEP-based product models to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis, on the other hand, is used to verify the functional requirements of an assembly considering the worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases and application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.
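
    The worst-case tolerance analysis mentioned above can be sketched as a one-dimensional stack-up: each dimension contributes its nominal value with a sign, and the worst-case gap adds or subtracts every tolerance. The dimensions below are made-up illustrative values, not from any STEP model.

    ```python
    # Worst-case (min/max) 1-D tolerance stack-up.
    # Each entry: (nominal, tolerance, direction), where direction is
    # +1 if the dimension opens the gap, -1 if it closes it.
    # Values are illustrative.

    def worst_case_gap(dims):
        nominal = sum(d * n for n, _, d in dims)
        spread = sum(t for _, t, _ in dims)  # every tolerance at its worst
        return nominal - spread, nominal, nominal + spread

    # A housing of 10 +/- 0.1 containing two parts of 4 +/- 0.05 each:
    stack = [(10.0, 0.1, +1), (4.0, 0.05, -1), (4.0, 0.05, -1)]
    gap_min, gap_nom, gap_max = worst_case_gap(stack)
    ```

    The functional requirement (e.g., a minimum clearance) is then checked against `gap_min` and `gap_max` rather than the nominal value alone.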

  4. Investigating consumers' and informal carers' views and preferences for consumer directed care: A discrete choice experiment.

    PubMed

    Kaambwa, Billingsley; Lancsar, Emily; McCaffrey, Nicola; Chen, Gang; Gill, Liz; Cameron, Ian D; Crotty, Maria; Ratcliffe, Julie

    2015-09-01

    Consumer directed care (CDC) is currently being embraced internationally as a means to promote autonomy and choice for consumers (people aged 65 and over) receiving community aged care services (CACSs). CDC involves giving CACS clients (consumers and informal carers of consumers) control over how CACSs are administered. However, CDC models have largely developed in the absence of evidence on clients' views and preferences. We explored CACS clients' preferences for a variety of CDC attributes and identified factors that may influence these preferences and potentially inform improved design of future CDC models. Study participants were clients of CACSs delivered by five Australian providers. Using a discrete choice experiment (DCE) approach undertaken in a group setting between June and December 2013, we investigated the relative importance to CACS consumers and informal (family) carers of gradations relating to six salient features of CDC (choice of service provider(s), budget management, saving unused/unspent funds, choice of support/care worker(s), support-worker flexibility and level of contact with service coordinator). The DCE data were analysed using conditional, mixed and generalised logit regression models, accounting for preference and scale heterogeneity. Mean ages for the 117 study participants were 80 years (87 consumers) and 74 years (30 informal carers). All participants preferred a CDC approach that allowed them to: save unused funds from a CACS package for future use; have support workers that were flexible in terms of changing activities within their CACS care plan; and choose the support workers that provide their day-to-day CACSs. The CDC attributes found to be important to both consumers and informal carers receiving CACSs will inform the design of future CDC models of service delivery. 
The DCE approach used in this study has the potential for wide applicability and facilitates the assessment of preferences for elements of potential future aged care service delivery not yet available in policy. Copyright © 2015 Elsevier Ltd. All rights reserved.
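
    The conditional logit model underlying the DCE analysis computes each alternative's choice probability as a softmax of a linear utility index over its attribute levels. A minimal sketch, with hypothetical attribute codings and coefficients that are not the study's estimates:

    ```python
    import math

    # Conditional logit choice probabilities for one choice set.
    # Each alternative is a vector of attribute levels; beta holds the
    # taste weights. Hypothetical 0/1 codings for three CDC features
    # (save unused funds, worker flexibility, choice of worker) —
    # not the study's estimates.

    def choice_probabilities(alternatives, beta):
        utilities = [sum(b * x for b, x in zip(beta, alt))
                     for alt in alternatives]
        m = max(utilities)                    # stabilise the exponentials
        expu = [math.exp(u - m) for u in utilities]
        total = sum(expu)
        return [e / total for e in expu]

    beta = [0.9, 0.6, 0.7]                    # hypothetical coefficients
    alts = [[1, 1, 0], [0, 1, 1], [0, 0, 0]]  # three CDC packages
    probs = choice_probabilities(alts, beta)
    ```

    Estimation then chooses `beta` to maximize the likelihood of the observed choices; mixed and generalized logit variants additionally let `beta` (or the scale) vary across respondents, as in the study.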

  5. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only one part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition when an object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic image understanding problem. The brain reduces informational and computational complexity by using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will recognize targets reliably.

  6. A new approach to optimal selection of services in health care organizations.

    PubMed

    Adolphson, D L; Baird, M L; Lawrence, K D

    1991-01-01

    A new reimbursement policy adopted by Medicare in 1983 caused financial difficulties for many hospitals and health care organizations. Several organizations responded to these difficulties by developing systems to carefully measure their costs of providing services, with the purpose of providing relevant information about the profitability of hospital services. This paper presents a new method of making hospital service selection decisions: it is based on an optimization model that avoids arbitrary cost allocations as a basis for computing the costs of offering a given service. The new method provides more reliable information about which services are profitable or unprofitable, and it provides an accurate measure of the degree to which a service is profitable or unprofitable. The new method also provides useful information about the sensitivity of the optimal decision to changes in costs and revenues. Specialized algorithms for the optimization model lead to very efficient implementation of the method, even for the largest health care organizations.
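
    The key idea — avoiding arbitrary allocation of shared fixed costs by evaluating service portfolios as a whole — can be illustrated with a brute-force sketch. The paper uses specialized optimization algorithms rather than enumeration, and the services, costs, and resources below are invented.

    ```python
    from itertools import combinations

    # Portfolio-level service selection: a shared resource's fixed cost
    # is paid once if any chosen service needs it, and is never
    # allocated per service. All figures are invented for illustration.

    FIXED = {"lab": 100.0, "imaging": 150.0}   # shared fixed costs
    SERVICES = {  # name: (revenue, direct cost, required resources)
        "A": (120.0, 30.0, {"lab"}),
        "B": (80.0, 20.0, {"lab"}),
        "C": (100.0, 40.0, {"imaging"}),
    }

    def profit(chosen):
        margin = sum(SERVICES[s][0] - SERVICES[s][1] for s in chosen)
        used = set().union(*(SERVICES[s][2] for s in chosen))
        return margin - sum(FIXED[r] for r in used)

    def best_portfolio():
        names = list(SERVICES)
        subsets = [frozenset(c) for k in range(len(names) + 1)
                   for c in combinations(names, k)]
        return max(subsets, key=profit)
    ```

    Evaluated alone, service B's margin of 60 cannot cover the lab's fixed cost of 100, yet once service A is offered, adding B raises total profit by the full 60 — the kind of conclusion that per-service cost allocation can obscure.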

  7. IDSE Version 1 User's Manual

    NASA Technical Reports Server (NTRS)

    Mayer, Richard

    1988-01-01

    The integrated development support environment (IDSE) is a suite of integrated software tools that provide intelligent support for information modeling. These tools assist in function, information, and process modeling. Additional tools exist to assist in gathering and analyzing the information to be modeled. This document is a user's guide to the application of the IDSE. Sections covering the requirements and design of each of the tools are presented. There are currently three integrated computer aided manufacturing definition (IDEF) modeling methodologies: IDEF0, IDEF1, and IDEF2. Four appendices describe hardware and software requirements, installation procedures, and basic hardware usage.

  8. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    NASA Astrophysics Data System (ADS)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  9. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
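
    The pattern described — associating documents and user-added notes with station objects and retrieving them from within the modeling environment — can be sketched with a minimal in-memory store. The class and field names below are invented and do not reflect the Intelligent Virtual Station's actual schema.

    ```python
    from dataclasses import dataclass, field

    # Minimal sketch of object/document association. Names are
    # invented and not the Intelligent Virtual Station's schema.

    @dataclass
    class StationObject:
        object_id: str
        name: str
        documents: list = field(default_factory=list)  # diagrams, specs
        notes: list = field(default_factory=list)      # user annotations

    class ObjectStore:
        def __init__(self):
            self._objects = {}

        def add(self, obj):
            self._objects[obj.object_id] = obj

        def attach_document(self, object_id, doc):
            self._objects[object_id].documents.append(doc)

        def annotate(self, object_id, note):
            self._objects[object_id].notes.append(note)

        def lookup(self, object_id):
            return self._objects[object_id]

    store = ObjectStore()
    store.add(StationObject("TRUSS-01", "Truss segment"))        # hypothetical id
    store.attach_document("TRUSS-01", "wiring_diagram_rev3.pdf") # hypothetical doc
    store.annotate("TRUSS-01", "Check connector before EVA.")
    ```

    A production system would back the store with a database, but the retrieval pattern from the simulation environment is the same.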

  10. A self-scaling, distributed information architecture for public health, research, and clinical care.

    PubMed

    McMurry, Andrew J; Gilbert, Clint A; Reis, Ben Y; Chueh, Henry C; Kohane, Isaac S; Mandl, Kenneth D

    2007-01-01

    This study sought to define a scalable architecture to support the National Health Information Network (NHIN). This architecture must concurrently support a wide range of public health, research, and clinical care activities. The architecture fulfils five desiderata: (1) adopt a distributed approach to data storage to protect privacy, (2) enable strong institutional autonomy to engender participation, (3) provide oversight and transparency to ensure patient trust, (4) allow variable levels of access according to investigator needs and institutional policies, and (5) define a self-scaling architecture that encourages voluntary regional collaborations that coalesce to form a nationwide network. Our model has been validated by a large-scale, multi-institution study involving seven medical centers for cancer research. It is the basis of one of four open architectures developed under funding from the Office of the National Coordinator of Health Information Technology, fulfilling the biosurveillance use case defined by the American Health Information Community. The model supports broad applicability for regional and national clinical information exchanges. This model shows the feasibility of an architecture wherein the requirements of care providers, investigators, and public health authorities are served by a distributed model that grants autonomy, protects privacy, and promotes participation.

  11. A Comparison of the One-, the Modified Three-, and the Three-Parameter Item Response Theory Models in the Test Development Item Selection Process.

    ERIC Educational Resources Information Center

    Eignor, Daniel R.; Douglass, James B.

    This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…
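
    The item information curves compared in the paper come from standard IRT results. A sketch under the three-parameter logistic (3PL) model, which reduces to the one-parameter model when a = 1 and c = 0; the item parameters used in the example are illustrative:

    ```python
    import math

    # 3PL item response function and Fisher item information.
    # With a = 1 and c = 0 this reduces to the one-parameter logistic
    # model, whose information is simply P * (1 - P).
    # Item parameters below are illustrative.

    def p_correct(theta, a, b, c):
        """Probability of a correct response at ability theta."""
        return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

    def item_information(theta, a, b, c):
        p = p_correct(theta, a, b, c)
        q = 1.0 - p
        return a * a * (q / p) * ((p - c) / (1.0 - c)) ** 2

    def pick_item(theta, items):
        """Select the item most informative at ability level theta."""
        return max(items, key=lambda item: item_information(theta, *item))
    ```

    Test development then selects items whose information curves peak where measurement precision is needed, which is the selection process the paper compares across models.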

  12. A test of an expert-based bird-habitat relationship model in South Carolina

    Treesearch

    John C. Kilgo; David L. Gartner; Brian R. Chapman; John B. Dunnin; Kathleen E. Franzreb; Sidney A. Gauthreaux; Cathryn H. Greenberg; Douglas J. Levey; Karl V. Miller; Scott F. Pearson

    2002-01-01

    Wildlife-habitat relationship models are used widely by land managers to provide information on which species are likely to occur in an area of interest and may be impacted by a proposed management activity. Few such models have been tested. We used recent avian census data from the Savannah River Site, South Carolina, to validate BIRDHAB, a geographic information...

  13. [Study on Information Extraction of Clinic Expert Information from Hospital Portals].

    PubMed

    Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li

    2015-12-01

    Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to judge which search forms are relevant. This paper proposes a novel method based on a domain model, which is a tree structure constructed from the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the returned web pages indexed by search interfaces. To filter the noise information on a web page, a block importance model is proposed. The experimental results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.
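
    The precision and F₁ comparisons reported above use the standard extraction-evaluation metrics; a minimal implementation for reference, with made-up counts in the example:

    ```python
    # Standard extraction-evaluation metrics of the kind used to compare
    # the domain model and block importance model against baselines.
    # The counts below are illustrative, not the paper's data.

    def precision_recall_f1(true_pos, false_pos, false_neg):
        precision = true_pos / (true_pos + false_pos)
        recall = true_pos / (true_pos + false_neg)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    p, r, f1 = precision_recall_f1(true_pos=90, false_pos=10, false_neg=30)
    ```

    Precision penalizes noise blocks wrongly extracted, recall penalizes expert records missed, and F₁ balances the two, which is why the paper reports both.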

  14. CLIMACS: a computer model of forest stand development for western Oregon and Washington.

    Treesearch

    Virginia H. Dale; Miles Hemstrom

    1984-01-01

    A simulation model for the development of timber stands in the Pacific Northwest is described. The model grows individual trees of 21 species in a 0.20-hectare (0.08-acre) forest gap. The model provides a means of assimilating existing information, indicates where knowledge is deficient, suggests where the forest system is most sensitive, and provides a first testing...

  15. Nutrient Modeling (Hypoxia Task Force Search) | ECHO | US ...

    EPA Pesticide Factsheets

    ECHO, Enforcement and Compliance History Online, provides compliance and enforcement information for approximately 800,000 EPA-regulated facilities nationwide. ECHO includes permit, inspection, violation, enforcement action, and penalty information about facilities regulated under the Clean Air Act (CAA) Stationary Source Program, Clean Water Act (CWA) National Pollutant Discharge Elimination System (NPDES), and/or Resource Conservation and Recovery Act (RCRA). Information on surrounding demographics is also provided when available.

  16. A preliminary geodetic data model for geographic information systems

    NASA Astrophysics Data System (ADS)

    Kelly, K. M.

    2009-12-01

    Our ability to gather and assimilate integrated data collections from multiple disciplines is important for earth system studies. Moreover, geosciences data collection has increased dramatically, with pervasive networks of observational stations on the ground, in the oceans, in the atmosphere and in space. Contemporary geodetic observations from several space and terrestrial technologies contribute to our knowledge of earth system processes and thus are a valuable source of high accuracy information for many global change studies. Assimilation of these geodetic observations and numerical models into models of weather, climate, oceans, hydrology, ice, and solid Earth processes is an important contribution geodesists can make to the earth science community. Clearly, the geodetic observations and models are fundamental to these contributions. ESRI wishes to provide leadership in the geodetic community to collaboratively build an open, freely available content specification that can be used by anyone to structure and manage geodetic data. This Geodetic Data Model will provide important context for all geographic information. The production of a task-specific geodetic data model involves several steps. The goal of the data model is to provide useful data structures and best practices for each step, making it easier for geodesists to organize their data and metadata in a way that will be useful in their data analyses and to their customers. Built on concepts from the successful Arc Marine data model, we introduce common geodetic data types and summarize the main thematic layers of the Geodetic Data Model. These provide a general framework for envisioning the core feature classes required to represent geodetic data in a geographic information system. Like Arc Marine, the framework is generic to allow users to build workflow or product specific geodetic data models tailored to the specific task(s) at hand. 
This approach allows integration of the data with other existing geophysical datasets, thus facilitating creation of multi-tiered models. The Geodetic Data Model encourages data assimilation and analysis and facilitates data interoperability, coordination and integration in earth system modeling. It offers a basic set of data structures organized in a simple and homogeneous way and can streamline access to and processing of geodetic data. It can aid knowledge discovery through the use of GIS technology to enable identification and understanding of relationships and provide well-established tools and methods to communicate complex technical knowledge with non-specialist audiences. The Geodetic Data Model comprises the base classes for using workflow-driven ontology (WDO) techniques for specifying the computation of complex geodetic products, along with the ability to capture provenance information. While we do not specify a WDO for any given geodetic product, we recognize that structured geodetic data is essential for generating any geodetic WDO, a task that can be streamlined in some GIS software.

  17. Health care use amongst online buyers of medications and vitamins.

    PubMed

    Desai, Karishma; Chewning, Betty; Mott, David

    2015-01-01

    With increased use of the internet, more people access medications and health supplements online. However, little is known about factors associated with online buying. Given the variable quality of online pharmacies, an important question is whether online consumers also have health care providers with whom they discuss internet information and decisions. To help address these gaps, this study used the Andersen Model to explore (1) the characteristics of internet buyers of medicines and/or vitamins, (2) the association between health care use and buying medicines and/or vitamins online, drawing on the Andersen health care utilization framework, and (3) factors predicting discussion of internet information with health providers. The National Cancer Institute's Health Information National Trends Survey (HINTS) 2007 was analyzed to study online medication buying among a national sample of internet users (N = 5074). The Andersen Model of health care utilization guided the study's variable selection and analyses. Buying online and talking about online information are the two main outcome variables. Separate multivariate logistic regression analyses identified factors associated with online buying and factors predicting discussions with providers about online information. In 2007, 14.5% (n = 871) of internet users bought a medication or vitamin online. About 85% of online buyers had a regular provider, but only 39% talked to the provider about online information even though most (93.7%) visited the provider ≥1 times/year. Multivariate analyses found internet health product consumers were more likely to be over 50 years old, have insurance and discuss the internet with their provider than non-internet health product consumers. Moreover, discussion of internet information was more likely if consumers had a regular provider and perceived their communication to be at least fair or good in general. 
There is a clear association of online buying with age, frequency of visits and discussing online information with a provider. Although most online buyers visited a provider in the prior year, only a minority discussed the internet with them. This suggests a missed opportunity for providers to help patients navigate internet buying, particularly if they are a patient's regular provider and the patient perceives their communication as good. Published by Elsevier Inc.

  18. Implementing Information and Communication Technology to Support Community Aged Care Service Integration: Lessons from an Australian Aged Care Provider.

    PubMed

    Douglas, Heather E; Georgiou, Andrew; Tariq, Amina; Prgomet, Mirela; Warland, Andrew; Armour, Pauline; Westbrook, Johanna I

    2017-04-10

    There is limited evidence of the benefits of information and communication technology (ICT) to support integrated aged care services. We undertook a case study to describe carelink+, a centralised client service management ICT system implemented by a large aged and community care service provider, Uniting. We sought to explicate the care-related information exchange processes associated with carelink+ and identify lessons for organisations attempting to use ICT to support service integration. Our case study included seventeen interviews and eleven observation sessions with a purposive sample of staff within the organisation. Inductive analysis was used to develop a model of ICT-supported information exchange. Management staff described the integrated care model designed to underpin carelink+. Frontline staff described complex information exchange processes supporting coordination of client services. Mismatches between the data quality and the functions carelink+ was designed to support necessitated the evolution of new work processes associated with the system. There is value in explicitly modelling the work processes that emerge as a consequence of ICT. Continuous evaluation of the match between ICT and work processes will help aged care organisations to achieve higher levels of ICT maturity that support their efforts to provide integrated care to clients.

  19. Implementing Information and Communication Technology to Support Community Aged Care Service Integration: Lessons from an Australian Aged Care Provider

    PubMed Central

    Georgiou, Andrew; Tariq, Amina; Prgomet, Mirela; Warland, Andrew; Armour, Pauline; Westbrook, Johanna I

    2017-01-01

    Introduction: There is limited evidence of the benefits of information and communication technology (ICT) to support integrated aged care services. Objectives: We undertook a case study to describe carelink+, a centralised client service management ICT system implemented by a large aged and community care service provider, Uniting. We sought to explicate the care-related information exchange processes associated with carelink+ and identify lessons for organisations attempting to use ICT to support service integration. Methods: Our case study included seventeen interviews and eleven observation sessions with a purposive sample of staff within the organisation. Inductive analysis was used to develop a model of ICT-supported information exchange. Results: Management staff described the integrated care model designed to underpin carelink+. Frontline staff described complex information exchange processes supporting coordination of client services. Mismatches between the data quality and the functions carelink+ was designed to support necessitated the evolution of new work processes associated with the system. Conclusions: There is value in explicitly modelling the work processes that emerge as a consequence of ICT. Continuous evaluation of the match between ICT and work processes will help aged care organisations to achieve higher levels of ICT maturity that support their efforts to provide integrated care to clients. PMID:29042851

  20. CISNET: Resources

    Cancer.gov

The Publications pages provide lists of all CISNET publications since the inception of CISNET. Publications are listed by Cancer Site or by Research Topic. The Publication Support and Modeling Resources pages provide access to technical modeling information, raw data, and publication extensions stemming from the work of the CISNET consortium.

  1. The Impacts of Information-Sharing Mechanisms on Spatial Market Formation Based on Agent-Based Modeling

    PubMed Central

    Li, Qianqian; Yang, Tao; Zhao, Erbo; Xia, Xing’ang; Han, Zhangang

    2013-01-01

    There has been an increasing interest in the geographic aspects of economic development, exemplified by P. Krugman’s logical analysis. We show in this paper that the geographic aspects of economic development can be modeled using multi-agent systems that incorporate multiple underlying factors. The extent of information sharing is assumed to be a driving force that leads to economic geographic heterogeneity across locations without geographic advantages or disadvantages. We propose an agent-based market model that considers a spectrum of different information-sharing mechanisms: no information sharing, information sharing among friends and pheromone-like information sharing. Finally, we build a unified model that accommodates all three of these information-sharing mechanisms based on the number of friends who can share information. We find that the no information-sharing model does not yield large economic zones, and more information sharing can give rise to a power-law distribution of market size that corresponds to the stylized fact of city size and firm size distributions. The simulations show that this model is robust. This paper provides an alternative approach to studying economic geographic development, and this model could be used as a test bed to validate the detailed assumptions that regulate real economic agglomeration. PMID:23484007
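
The mechanism described above — market concentration emerging once agents can learn about crowded sites from friends — can be illustrated with a toy sketch. This is a simplified illustration with hypothetical parameters (200 agents, 20 sites, 10% exploration noise), not the authors' model:

```python
import random
from collections import Counter

def simulate(n_agents=200, n_sites=20, steps=50, n_friends=0, seed=1):
    """Toy agent-based market: each step, an agent learns the locations of
    n_friends other agents and moves to the most crowded site it knows of,
    with a small chance of random exploration."""
    rng = random.Random(seed)
    sites = range(n_sites)
    pos = [rng.choice(sites) for _ in range(n_agents)]
    friends = [rng.sample(range(n_agents), n_friends) for _ in range(n_agents)]
    for _ in range(steps):
        counts = Counter(pos)          # current market sizes
        new_pos = []
        for a in range(n_agents):
            if rng.random() < 0.1:     # exploration noise
                new_pos.append(rng.choice(sites))
            else:
                known = [pos[a]] + [pos[f] for f in friends[a]]
                new_pos.append(max(known, key=lambda s: counts[s]))
        pos = new_pos
    return Counter(pos)

# More sharing should concentrate agents into fewer, larger markets:
big_no_share = max(simulate(n_friends=0).values())
big_share = max(simulate(n_friends=10).values())
assert big_share > big_no_share
```

With no sharing each agent only knows its own site, so the occupancy stays near uniform; with friend-based sharing a rich-get-richer feedback drives agglomeration, in line with the paper's qualitative finding.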

  2. Greater Patient Health Information Control to Improve the Sustainability of Health Information Exchanges.

    PubMed

    Abdelhamid, Mohamed

    2018-06-09

Health information exchanges (HIEs) are multisided platforms that facilitate the sharing of patient health information (PHI) between providers and payers across organizations within a region, community, or hospital system. The benefits of HIEs to payers and providers include lower cost, faster services, and better health outcomes. However, most HIEs have configured the patient healthcare consent process to give all providers who sign up with the exchange access to PHI for all consenting patients, leaving patients no control over what information to share and with whom. This research investigates the impact of granting patients greater control over sharing their personal health information on consent rates, making them active participants in the HIE system. This research utilizes a randomized experimental survey design study. The study uses responses from 388 participants and structural equation modeling (SEM) to test the conceptual model. The main findings of this research include that patients' consent rate increases significantly when greater control in sharing PHI is offered to the patient. In addition, greater control reduces the negative impact of privacy concern on the intention to consent. Similarly, trust in healthcare professionals leads to higher consent when greater control is offered to the patient. Thus, greater control empowers the role of trust in engaging patients and sustaining HIEs. The paper makes a theoretical contribution to research by extending the unified theory of acceptance and use of technology (UTAUT) model. The findings impact practice by providing insights that will help sustain HIEs. Copyright © 2018. Published by Elsevier Inc.

  3. Using language models to identify relevant new information in inpatient clinical notes.

    PubMed

    Zhang, Rui; Pakhomov, Serguei V; Lee, Janet T; Melton, Genevieve B

    2014-01-01

    Redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous and may negatively impact the use of these notes by clinicians, and, potentially, the efficiency of patient care delivery. Automated methods to identify redundant versus relevant new information may provide a valuable tool for clinicians to better synthesize patient information and navigate to clinically important details. In this study, we investigated the use of language models for identification of new information in inpatient notes, and evaluated our methods using expert-derived reference standards. The best method achieved precision of 0.743, recall of 0.832 and F1-measure of 0.784. The average proportion of redundant information was similar between inpatient and outpatient progress notes (76.6% (SD=17.3%) and 76.7% (SD=14.0%), respectively). Advanced practice providers tended to have higher rates of redundancy in their notes compared to physicians. Future investigation includes the addition of semantic components and visualization of new information.

  4. Using Language Models to Identify Relevant New Information in Inpatient Clinical Notes

    PubMed Central

    Zhang, Rui; Pakhomov, Serguei V.; Lee, Janet T.; Melton, Genevieve B.

    2014-01-01

    Redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous and may negatively impact the use of these notes by clinicians, and, potentially, the efficiency of patient care delivery. Automated methods to identify redundant versus relevant new information may provide a valuable tool for clinicians to better synthesize patient information and navigate to clinically important details. In this study, we investigated the use of language models for identification of new information in inpatient notes, and evaluated our methods using expert-derived reference standards. The best method achieved precision of 0.743, recall of 0.832 and F1-measure of 0.784. The average proportion of redundant information was similar between inpatient and outpatient progress notes (76.6% (SD=17.3%) and 76.7% (SD=14.0%), respectively). Advanced practice providers tended to have higher rates of redundancy in their notes compared to physicians. Future investigation includes the addition of semantic components and visualization of new information. PMID:25954438
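
The precision, recall, and F1 figures reported above are linked by the standard harmonic-mean definition of F1; a quick sanity check of the published numbers (generic formula, not the authors' evaluation code):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The reported precision/recall pair implies the reported F1 up to rounding:
print(round(f1_score(0.743, 0.832), 3))  # → 0.785 (paper reports 0.784)
```

The small discrepancy comes from the precision and recall themselves being rounded to three decimals in the abstract.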

  5. Multilingual Medical Data Models in ODM Format

    PubMed Central

    Breil, B.; Kenneweg, J.; Fritz, F.; Bruland, P.; Doods, D.; Trinczek, B.; Dugas, M.

    2012-01-01

    Background Semantic interoperability between routine healthcare and clinical research is an unsolved issue, as information systems in the healthcare domain still use proprietary and site-specific data models. However, information exchange and data harmonization are essential for physicians and scientists if they want to collect and analyze data from different hospitals in order to build up registries and perform multicenter clinical trials. Consequently, there is a need for a standardized metadata exchange based on common data models. Currently this is mainly done by informatics experts instead of medical experts. Objectives We propose to enable physicians to exchange, rate, comment and discuss their own medical data models in a collaborative web-based repository of medical forms in a standardized format. Methods Based on a comprehensive requirement analysis, a web-based portal for medical data models was specified. In this context, a data model is the technical specification (attributes, data types, value lists) of a medical form without any layout information. The CDISC Operational Data Model (ODM) was chosen as the appropriate format for the standardized representation of data models. The system was implemented with Ruby on Rails and applies web 2.0 technologies to provide a community based solution. Forms from different source systems – both routine care and clinical research – were converted into ODM format and uploaded into the portal. Results A portal for medical data models based on ODM-files was implemented (http://www.medical-data-models.org). Physicians are able to upload, comment, rate and download medical data models. More than 250 forms with approximately 8000 items are provided in different views (overview and detailed presentation) and in multiple languages. For instance, the portal contains forms from clinical and research information systems. 
Conclusion The portal provides a system-independent repository for multilingual data models in ODM format which can be used by physicians. It serves as a platform for discussion and enables the exchange of multilingual medical data models in a standardized way. PMID:23620720

  6. Inside Information--A Door to the Future?

    ERIC Educational Resources Information Center

    Twining, John

    1986-01-01

    Explains background to development by the British Broadcasting Corporation of Inside Information, a course providing an introduction to information technology for adults, and its linking to the City and Guilds of London Institute's short-course program. Medium and long-term education scenarios are suggested based on the Inside Information model.…

  7. A Model Evaluation Data Set for the Tropical ARM Sites

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    This data set has been derived from various ARM and external data sources with the main aim of providing modelers easy access to quality controlled data for model evaluation. The data set contains highly aggregated (in time) data from a number of sources at the tropical ARM sites at Manus and Nauru. It spans the years of 1999 and 2000. The data set contains information on downward surface radiation; surface meteorology, including precipitation; atmospheric water vapor and cloud liquid water content; hydrometeor cover as a function of height; and cloud cover, cloud optical thickness and cloud top pressure information provided by the International Satellite Cloud Climatology Project (ISCCP).

  8. Joint Applications Pilot of the National Climate Predictions and Projections Platform and the North Central Climate Science Center: Delivering climate projections on regional scales to support adaptation planning

    NASA Astrophysics Data System (ADS)

    Ray, A. J.; Ojima, D. S.; Morisette, J. T.

    2012-12-01

The DOI North Central Climate Science Center (NC CSC) and the NOAA/NCAR National Climate Predictions and Projections (NCPP) Platform have initiated a joint pilot study to collaboratively explore the "best available climate information" to support key land management questions and how to provide this information. NCPP's mission is to support state-of-the-art approaches to develop and deliver comprehensive regional climate information and facilitate its use in decision making and adaptation planning. This presentation will describe the evolving joint pilot as a tangible, real-world demonstration of linkages between climate science, ecosystem science and resource management. Our joint pilot is developing a deliberate, ongoing interaction to prototype how NCPP will work with CSCs to develop and deliver needed climate information products, including translational information to support climate data understanding and use. This pilot also will build capacity in the North Central CSC by working with NCPP to use climate information as input to ecological modeling. We will discuss lessons to date on developing and delivering needed climate information products based on this strategic partnership. Four projects have been funded to collaborate to incorporate climate information as part of an ecological modeling project, which in turn will address key DOI stakeholder priorities in the region: Riparian Corridors: Projecting climate change effects on cottonwood and willow seed dispersal phenology, flood timing, and seedling recruitment in western riparian forests. Sage Grouse & Habitats: Integrating climate and biological data into land management decision models to assess species and habitat vulnerability. Grasslands & Forests: Projecting future effects of land management, natural disturbance, and CO2 on woody encroachment in the Northern Great Plains. The value of climate information: Supporting management decisions in the Plains and Prairie Potholes LCC.
NCCSC's role in these projects is to provide the connections between climate data and running ecological models, and to prototype these for future work. NCPP will develop capacities to provide enhanced climate information at relevant spatial and temporal scales, both for historical climate and projections of future climate, and will work to link expert guidance and understanding of modeling processes and evaluation of modeling with the use of numerical climate data. Translational information is thus a suite of information that aids in translating numerical climate information into usable knowledge for applications, e.g. ecological response models and hydrologic risk studies. This information includes technical and scientific aspects such as: 1) results of objective, quantitative evaluation of climate models and downscaling techniques; 2) guidance on appropriate uses and interpretation, i.e., understanding the advantages and limitations of various downscaling techniques for specific user applications; 3) characterizing and interpreting uncertainty; 4) descriptions meaningful to applications, e.g. narratives. NCPP believes that translational information is best co-developed between climate scientists and applications scientists, as in the NC-CSC pilot.

  9. Librarianship and Public Culture in the Age of Information Capitalism.

    ERIC Educational Resources Information Center

    Blanke, Henry T.

    1996-01-01

    Contends that an entrepreneurial model of librarianship contradicts traditional ideals of free and equal access to information and argues that such a model threatens the future of the library as a vital public sphere of democratic culture. Discusses broad trends of advanced capitalism to provide a context for the critical interpretation of issues…

  10. Using sampling theory as the basis for a conceptual data model

    Treesearch

    Fred C. Martin; Tonya Baggett; Tom Wolfe

    2000-01-01

    Greater demands on forest resources require that larger amounts of information be readily available to decisionmakers. To provide more information faster, databases must be developed that are more comprehensive and easier to use. Data modeling is a process for building more complete and flexible databases by emphasizing fundamental relationships over existing or...

  11. 12 CFR Appendix C to Part 325 - Risk-Based Capital for State Non-Member Banks: Market Risk

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... provide information about the impact of adverse market events on a bank's covered positions. Backtests provide information about the accuracy of an internal model by comparing a bank's daily VAR measures to... determines the bank meets such criteria as a consequence of accounting, operational, or similar...

  12. ICLIS: A Model for Extending Knowledge to Residents in Rural Communities. A Planning Workbook.

    ERIC Educational Resources Information Center

    Utah State Univ., Logan.

    The Intermountain Community Learning and Information Services (ICLIS) project was begun in 1985 to provide rural communities in Colorado, Montana, Utah, and Wyoming with greater access to information and educational programming. Computer centers were housed in nine rural public libraries to provide services related to literacy, career guidance,…

  13. Automated Generation of Tabular Equations of State with Uncertainty Information

    NASA Astrophysics Data System (ADS)

    Carpenter, John H.; Robinson, Allen C.; Debusschere, Bert J.; Mattsson, Ann E.

    2015-06-01

    As computational science pushes toward higher fidelity prediction, understanding the uncertainty associated with closure models, such as the equation of state (EOS), has become a key focus. Traditional EOS development often involves a fair amount of art, where expert modelers may appear as magicians, providing what is felt to be the closest possible representation of the truth. Automation of the development process gives a means by which one may demystify the art of EOS, while simultaneously obtaining uncertainty information in a manner that is both quantifiable and reproducible. We describe our progress on the implementation of such a system to provide tabular EOS tables with uncertainty information to hydrocodes. Key challenges include encoding the artistic expert opinion into an algorithmic form and preserving the analytic models and uncertainty information in a manner that is both accurate and computationally efficient. Results are demonstrated on a multi-phase aluminum model. *Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Recoverable information and emergent conservation laws in fracton stabilizer codes

    NASA Astrophysics Data System (ADS)

    Schmitz, A. T.; Ma, Han; Nandkishore, Rahul M.; Parameswaran, S. A.

    2018-04-01

    We introduce a new quantity that we term recoverable information, defined for stabilizer Hamiltonians. For such models, the recoverable information provides a measure of the topological information as well as a physical interpretation, which is complementary to topological entanglement entropy. We discuss three different ways to calculate the recoverable information and prove their equivalence. To demonstrate its utility, we compute recoverable information for fracton models using all three methods where appropriate. From the recoverable information, we deduce the existence of emergent Z2 Gauss-law-type constraints, which in turn imply emergent Z2 conservation laws for pointlike quasiparticle excitations of an underlying topologically ordered phase.

  15. Socio-economic exposure to natural disasters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marin, Giovanni, E-mail: giovanni.marin@uniurb.it; IRCrES - CNR, Research Institute on Sustainable Economic Growth, Via Corti 12, 20133 - Milano; SEEDS, Ferrara

Even though the correct assessment of risks is a key aspect of risk management analysis, we argue that limited effort has been devoted to the assessment of comprehensive measures of economic exposure at very low scale. For this reason, we aim to provide a set of suitable methodologies yielding a complete and detailed picture of the exposure of economic activities to natural disasters. We use Input-Output models to provide information about several socio-economic variables, such as population density, employment density, firms' turnover and capital stock, that can be seen as direct and indirect socio-economic exposure to natural disasters. We then provide an application to the Italian context. These measures can be easily incorporated into risk assessment models to provide a clear picture of the disaster risk for local areas. - Highlights: • Ex ante assessment of economic exposure to disasters at very low geographical scale • Assessment of the cost of natural disasters in an ex-post perspective • IO model and spatial autocorrelation to get information on socio-economic variables • Indicators supporting risk assessment and risk management models
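
The Input-Output machinery underlying such exposure measures reduces to the standard Leontief relation x = (I - A)^-1 * f, where A is the technical-coefficient matrix and f the final demand. A minimal two-sector sketch with hypothetical coefficients (not the paper's data):

```python
def solve_2x2(A, f):
    """Solve the Leontief system x = A x + f for a 2-sector toy economy
    by inverting (I - A) directly."""
    m = [[1 - A[0][0], -A[0][1]],
         [-A[1][0], 1 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    return [inv[0][0] * f[0] + inv[0][1] * f[1],
            inv[1][0] * f[0] + inv[1][1] * f[1]]

A = [[0.2, 0.3],   # hypothetical technical coefficients
     [0.1, 0.4]]
f = [100.0, 50.0]  # hypothetical final demand per sector
x = solve_2x2(A, f)

# sanity check: each sector's output covers intermediate use plus final demand
assert abs(x[0] - (A[0][0] * x[0] + A[0][1] * x[1] + f[0])) < 1e-9
```

Total output x (here roughly 166.7 and 111.1) exceeds final demand because of the indirect, inter-sector linkages — exactly the channel through which the paper captures indirect exposure.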

  16. ICIS Activity Subject Area Model

    EPA Pesticide Factsheets

    The Integrated Compliance Information System (ICIS) is a web-based system that provides information for the federal enforcement and compliance (FE&C) and the National Pollutant Discharge Elimination System (NPDES) programs.

  17. ICIS Contacts Subject Area Model

    EPA Pesticide Factsheets

    The Integrated Compliance Information System (ICIS) is a web-based system that provides information for the federal enforcement and compliance (FE&C) and the National Pollutant Discharge Elimination System (NPDES) programs.

  18. New York State energy-analytic information system: first-stage implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allentuck, J.; Carroll, O.; Fiore, L.

    1979-09-01

So that energy policy by state government may be formulated within the constraints imposed by policy determined at the national level - yet reflect the diverse interests of its citizens - large quantities of data and sophisticated analytic capabilities are required. This report presents the design of an energy-information/analytic system for New York State, the data for a base year, 1976, and projections of these data. At the county level, 1976 energy supply-demand data and electric generating plant data are provided as well. Data-base management is based on System 2000. Three computerized models provide the system's basic analytic capacity. The Brookhaven Energy System Network Simulator provides an integrating framework, while a price-response model and a weather-sensitive energy demand model furnish a short-term energy response estimation capability. The operation of these computerized models is described. 62 references, 25 figures, 39 tables.

  19. The Swarm Initial Field Model for the 2014 Geomagnetic Field

    NASA Technical Reports Server (NTRS)

    Olsen, Nils; Hulot, Gauthier; Lesur, Vincent; Finlay, Christopher C.; Beggan, Ciaran; Chulliat, Arnaud; Sabaka, Terence J.; Floberghagen, Rune; Friis-Christensen, Eigil; Haagmans, Roger

    2015-01-01

    Data from the first year of ESA's Swarm constellation mission are used to derive the Swarm Initial Field Model (SIFM), a new model of the Earth's magnetic field and its time variation. In addition to the conventional magnetic field observations provided by each of the three Swarm satellites, explicit advantage is taken of the constellation aspect by including east-west magnetic intensity gradient information from the lower satellite pair. Along-track differences in magnetic intensity provide further information concerning the north-south gradient. The SIFM static field shows excellent agreement (up to at least degree 60) with recent field models derived from CHAMP data, providing an initial validation of the quality of the Swarm magnetic measurements. Use of gradient data improves the determination of both the static field and its secular variation, with the mean misfit for east-west intensity differences between the lower satellite pair being only 0.12 nT.

  20. Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.

    PubMed

    Honkela, Antti; Valpola, Harri

    2004-07-01

    The bits-back coding first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993 provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. The bits-back coding allows interpreting the cost function used in the variational Bayesian method called ensemble learning as a code length in addition to the Bayesian view of misfit of the posterior approximation and a lower bound of model evidence. Combining these two viewpoints provides interesting insights to the learning process and the functions of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation provides new views to many parts of the problem such as model comparison and pruning and helps explain many phenomena occurring in learning.
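
The core identity behind the code-length interpretation is that the expected description length under the approximate posterior q equals the negative ELBO: L(x) = E_q[log q(z)] - E_q[log p(x, z)]. A tiny discrete example with made-up numbers (one binary latent), illustrating that this code length in bits is bounded below by the negative log-evidence:

```python
import math

# Hypothetical two-state model: latent z in {0, 1}, observed x = 1.
p_z = [0.5, 0.5]          # prior p(z)
p_x_given_z = [0.8, 0.3]  # likelihood p(x=1 | z)
q_z = [0.7, 0.3]          # variational posterior q(z | x=1)

# ELBO = E_q[log p(x, z)] - E_q[log q(z)]  (in nats)
elbo = sum(q * (math.log(p_z[z] * p_x_given_z[z]) - math.log(q))
           for z, q in enumerate(q_z))

# Bits-back view: expected code length for x is the negative ELBO, in bits.
code_length_bits = -elbo / math.log(2)

# The ELBO lower-bounds the marginal log-evidence log p(x=1):
log_evidence = math.log(sum(p_z[z] * p_x_given_z[z] for z in (0, 1)))
assert elbo <= log_evidence
```

Tightening q toward the true posterior shrinks the gap, which is exactly the KL divergence the variational method minimizes — the same quantity read as "wasted bits" in the MDL view.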

  1. A defocus-information-free autostereoscopic three-dimensional (3D) digital reconstruction method using direct extraction of disparity information (DEDI)

    NASA Astrophysics Data System (ADS)

    Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu

    2016-10-01

Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. The 3D digital model of the target can be reconstructed from the series of two-dimensional (2D) information acquired by the autostereoscopic system, which consists of multiple lenses and can provide information on the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems. The method efficiently performs direct, full 3D digital model construction through a tomography-like operation on every depth plane, excluding defocused information. With only the focused information retained by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.
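
Disparity encodes depth through the generic pinhole-stereo relation z = f * B / d (focal length times baseline over disparity). The abstract describes DEDI only at a high level, so the following is merely an illustration of why disparity extraction yields depth, not the DEDI implementation:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Generic pinhole-stereo relation z = f * B / d; illustrative only,
    not the paper's DEDI pipeline. Units: focal length in pixels,
    baseline in mm, returned depth in mm."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# hypothetical camera: 1000 px focal length, 50 mm baseline between lenses
print(depth_from_disparity(10.0, 1000.0, 50.0))  # → 5000.0 (mm)
```

Larger disparities correspond to nearer points, which is why per-depth-plane ("tomography-like") processing can separate focused from defocused content.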

  2. Comparison of internal wave properties calculated by Boussinesq equations with/without rigid-lid assumption

    NASA Astrophysics Data System (ADS)

    Liu, C. M.

    2017-12-01

Wave properties predicted by the rigid-lid and the free-surface Boussinesq equations for a two-fluid system are theoretically calculated and compared in this study. Boussinesq models are generally applied to numerically simulate surface waves in coastal regions, providing credible information for disaster prevention and breakwater design. As for internal waves, Liu et al. (2008) and Liu (2016) respectively derived a free-surface and a rigid-lid Boussinesq model for a two-fluid system. The former and the latter models contain four and three key variables, respectively, which may lead to different results and efficiency in simulation. Therefore, the present study presents results theoretically calculated by these two models to provide more detailed observations and useful information on the motions of internal waves.

  3. Habitat suitability index models: Black crappie

    USGS Publications Warehouse

    Edwards, Elizabeth A.; Krieger, Douglas A.; Bacteller, Mary; Maughan, O. Eugene

    1982-01-01

    Characteristics and habitat requirements of the black crappie (Pomoxis nigromaculatus) are described in a review of Habitat Suitability Index models. This is one in a series of publications to provide information on the habitat requirements of selected fish and wildlife species. Numerous literature sources have been consulted in an effort to consolidate scientific data on species-habitat relationships. These data have subsequently been synthesized into explicit Habitat Suitability Index (HSI) models. The models are based on suitability indices indicating habitat preferences. Indices have been formulated for variables found to affect the life cycle and survival of each species. Habitat Suitability Index (HSI) models are designed to provide information for use in impact assessment and habitat management activities. The HSI technique is a corollary to the U.S. Fish and Wildlife Service's Habitat Evaluation Procedures.
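
HSI models of this series combine per-variable suitability indices, each scaled to [0, 1], into a single index. A common aggregation is the geometric mean, sketched below with hypothetical variables and values (not the black crappie model's actual SIs or equations):

```python
import math

def hsi(suitability_indices):
    """Geometric-mean aggregation of suitability indices: a single wholly
    unsuitable variable (SI = 0) drives the overall index to zero,
    capturing a limiting-factor effect."""
    n = len(suitability_indices)
    return math.prod(suitability_indices) ** (1.0 / n)

# e.g. hypothetical SIs for turbidity, vegetation cover, and pH
print(round(hsi([0.8, 0.6, 0.9]), 3))  # → 0.756
print(hsi([0.8, 0.0, 0.9]))            # → 0.0
```

The limiting-factor behavior is why such indices are useful in impact assessment: degrading any single required habitat variable degrades the whole score.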

  4. Communicating Ocean Sciences to Informal Audiences (COSIA): Universities, Oceanographic Institutions, Science Centers and Aquariums Working Together to Improve Ocean Education and Public Outreach

    NASA Astrophysics Data System (ADS)

    Glenn, S.; McDonnell, J.; Halversen, C.; Zimmerman, T.; Ingram, L.

    2007-12-01

Ocean observatories have already demonstrated their ability to maintain long-term time series, capture episodic events, provide context for improved shipboard sampling, and improve accessibility to a broader range of participants. Communicating Ocean Sciences, an already existing college course from COSEE-California, has demonstrated its ability to teach future scientists essential communication skills. The NSF-funded Communicating Ocean Sciences to Informal Audiences (COSIA) project has leveraged these experiences and others to demonstrate a long-term model for promoting effective science communication skills and techniques applicable to diverse audiences. The COSIA effort is one of the pathfinders for ensuring that the new scientific results from the increasing U.S. investments in ocean observatories are effectively communicated to the nation, and will serve as a model for other fields. COSIA established partnerships between informal science education institutions and universities nationwide to facilitate quality outreach by scientists and the delivery of rigorous, cutting-edge science by informal educators while teaching future scientists (college students) essential communication skills. The COSIA model includes scientist-educator partnerships that develop and deliver a college course that teaches communication skills through the understanding of learning theory specifically related to informal learning environments and the practice of these skills at aquariums and science centers.
The goals of COSIA are to: provide a model for establishing substantive, long-term partnerships between scientists and informal science education institutions to meet their respective outreach needs; provide future scientists with experiences delivering outreach and promoting the broader impact of research; and provide diverse role models and inquiry-based ocean sciences activities for children and families visiting informal institutions. The following COSIA partners have taught the course: Hampton University - Virginia Aquarium; Oregon State University - Hatfield Marine Science Visitor's Center; Rutgers University - Liberty Science Center; University of California, Berkeley - Lawrence Hall of Science; University of Southern California - Aquarium of the Pacific; and Scripps Institution of Oceanography - Birch Aquarium. Communicating Ocean Sciences has also been taught at Stanford, Woods Hole Oceanographic Institute, University of Oregon (GK-12 program), University of Washington, and others. Data from surveys of students demonstrates improvement in their understanding of how people learn and how to effectively communicate. Providing college students with a background in current learning theory, and applying that theory through practical science communication experiences, will empower future generations of scientists to meet the communication challenges they will encounter in their careers.

  5. AMEM-ADL Polymer Migration Estimation Model User's Guide

    EPA Pesticide Factsheets

    The user's guide for the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  6. Sediment-Hosted Zinc-Lead Deposits of the World - Database and Grade and Tonnage Models

    USGS Publications Warehouse

    Singer, Donald A.; Berger, Vladimir I.; Moring, Barry C.

    2009-01-01

    This report provides information on sediment-hosted zinc-lead mineral deposits based on the geologic settings observed on regional geologic maps. The foundation of mineral-deposit models is information about known deposits. The purpose of this publication is to make this kind of information available in digital form for sediment-hosted zinc-lead deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments: grades and tonnages differ significantly among deposit types, and many types occur in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables or for robust estimation of undiscovered deposits; thus, we need mineral-deposit models. Globally based deposit models allow recognition of important features because they demonstrate how common different features are. Well-designed and well-constructed deposit models allow geologists to infer, from observed geologic environments, the mineral-deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral-deposit models play the central role in transforming geoscience information into a form useful to policy makers. This publication contains a computer file of information on sediment-hosted zinc-lead deposits from around the world. It also presents new grade and tonnage models for nine types of these deposits and a file allowing the locations of all deposits to be plotted in Google Earth. The data are presented in FileMaker Pro, Excel, and text files to make the information available to as many users as possible.
The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules applied in this compilation. Next, the fields of the data file are considered. Finally, we provide new grade and tonnage models that are, for the most part, based on a classification of deposits using observable geologic units from regional-scale maps.
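Grade and tonnage models of this kind are conventionally summarized by the 90th, 50th, and 10th percentiles of the deposit-size distribution. A minimal sketch of that summary step, using hypothetical tonnages rather than values from the USGS compilation:

```python
import math

# Hypothetical pre-mining tonnages (million tonnes) for one deposit type;
# the values are illustrative, not taken from the USGS data file.
tonnages = [0.9, 1.7, 2.4, 3.6, 5.1, 7.8, 11.0, 16.0, 24.0, 40.0]

def percentile(sorted_vals, p):
    """Linear-interpolation percentile (p in [0, 100]) of a sorted list."""
    k = (len(sorted_vals) - 1) * p / 100.0
    f, c = math.floor(k), math.ceil(k)
    if f == c:
        return sorted_vals[int(k)]
    return sorted_vals[f] * (c - k) + sorted_vals[c] * (k - f)

data = sorted(tonnages)
# Grade-and-tonnage models conventionally report the tonnage exceeded by
# 90%, 50%, and 10% of deposits of the type.
p90 = percentile(data, 10)   # 90% of deposits are at least this large
p50 = percentile(data, 50)   # median deposit
p10 = percentile(data, 90)   # only 10% of deposits exceed this tonnage
print(p90, p50, p10)
```

The same three-number summary is what allows economists to judge, from a geologic setting alone, how large an undiscovered deposit of a given type is likely to be.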

  7. A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.

    2006-01-01

    The International Space Station (ISS) provides the proving ground for future long-duration human activities in space. Ionizing radiation measurements aboard the ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both environmental model and nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with thermoluminescent detector (TLD) area monitors demonstrated that computational dosimetry requires environmental models with accurate anisotropic as well as dynamic behavior, detailed information on rack loading, and an accurate six-degree-of-freedom (DOF) description of ISS trajectory and orientation.

  8. Finding mouse models of human lymphomas and leukemia's using the Jackson laboratory mouse tumor biology database.

    PubMed

    Begley, Dale A; Sundberg, John P; Krupke, Debra M; Neuhauser, Steven B; Bult, Carol J; Eppig, Janan T; Morse, Herbert C; Ward, Jerrold M

    2015-12-01

    Many mouse models have been created to study hematopoietic cancer types. There are over thirty hematopoietic tumor types and subtypes, both human and mouse, with various origins, characteristics, and clinical prognoses. Determining the specific type of hematopoietic lesion produced in a mouse model, and identifying mouse models that correspond to the human subtypes of these lesions, has been a continuing challenge for the scientific community. The Mouse Tumor Biology Database (MTB; http://tumor.informatics.jax.org) is designed to facilitate the use of mouse models of human cancer by providing detailed histopathologic and molecular information on lymphoma subtypes, including expertly annotated online whole-slide scans, and by providing a repository for storing and querying information on specific lymphoma models. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Creating Physical 3D Stereolithograph Models of Brain and Skull

    PubMed Central

    Kelley, Daniel J.; Farhoud, Mohammed; Meyerand, M. Elizabeth; Nelson, David L.; Ramirez, Lincoln F.; Dempsey, Robert J.; Wolf, Alan J.; Alexander, Andrew L.; Davidson, Richard J.

    2007-01-01

    The human brain and skull are three dimensional (3D) anatomical structures with complex surfaces. However, medical images are often two dimensional (2D) and provide incomplete visualization of structural morphology. To overcome this loss in dimension, we developed and validated a freely available, semi-automated pathway to build 3D virtual reality (VR) and hand-held, stereolithograph models. To evaluate whether surface visualization in 3D was more informative than in 2D, undergraduate students (n = 50) used the Gillespie scale to rate 3D VR and physical models of both a living patient-volunteer's brain and the skull of Phineas Gage, a historically famous railroad worker whose misfortune with a projectile tamping iron provided the first evidence of a structure-function relationship in brain. Using our processing pathway, we successfully fabricated human brain and skull replicas and validated that the stereolithograph model preserved the scale of the VR model. Based on the Gillespie ratings, students indicated that the biological utility and quality of visual information at the surface of VR and stereolithograph models were greater than the 2D images from which they were derived. The method we developed is useful to create VR and stereolithograph 3D models from medical images and can be used to model hard or soft tissue in living or preserved specimens. Compared to 2D images, VR and stereolithograph models provide an extra dimension that enhances both the quality of visual information and utility of surface visualization in neuroscience and medicine. PMID:17971879

  10. Biomedical databases: protecting privacy and promoting research.

    PubMed

    Wylie, Jean E; Mineau, Geraldine P

    2003-03-01

    When combined with medical information, large electronic databases of information that identify individuals provide superlative resources for genetic, epidemiology and other biomedical research. Such research resources increasingly need to balance the protection of privacy and confidentiality with the promotion of research. Models that do not allow the use of such individual-identifying information constrain research; models that involve commercial interests raise concerns about what type of access is acceptable. Researchers, individuals representing the public interest and those developing regulatory guidelines must be involved in an ongoing dialogue to identify practical models.

  11. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing the flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets, as the basic methodology for business process modeling, and an XML net-based software toolset providing comprehensive functionality for POIS development.

  12. A model of awareness to enhance our understanding of interprofessional collaborative care delivery and health information system design to support it.

    PubMed

    Kuziemsky, Craig E; Varpio, Lara

    2011-08-01

    As more healthcare delivery is provided by collaborative teams, there is a need for enhanced design of health information systems (HISs) to support collaborative care delivery. The purpose of this study was to develop a model of the different types of awareness that exist in interprofessional collaborative care (ICC) delivery to inform HIS design to support ICC. Qualitative data collection and analysis were performed. The data sources consisted of 90 h of non-participant observations and 30 interviews with nurses, physicians, medical residents, volunteers, and personal support workers. Many of the macro-level ICC activities (e.g. morning rounds, shift change) were constituted by micro-level activities that involved different types of awareness. We identified four primary types of ICC awareness: patient, team member, decision making, and environment. Each type of awareness is discussed and supported by study data. We also discuss the implications of our findings for enhanced design of existing HISs, as well as providing insight into how HISs could be better designed to support ICC awareness. Awareness is a complex yet crucial piece of successful ICC. The information sources that provided and supported ICC awareness were varied. The different types of awareness in the model can help us understand the explicit details of how care providers communicate and exchange information with one another. Increased understanding of ICC awareness can assist with the design and evaluation of HISs to support collaborative activities. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Climate Change Information Dashboards for Water Resource Managers

    NASA Astrophysics Data System (ADS)

    Buja, Lawrence

    2016-04-01

    It is in the context of its application that one needs to determine whether climate information is of high quality and ultimately useful. Therefore, it is important that the intersection between data providers and data consumers is structured in the form of an iterative and collaborative exchange where science and application viewpoints can be brought together. A traditional "loading dock"-style hand-off of data fails to optimally inform decisions. It is now broadly recognized that a collaborative, open exchange is better suited to generating credible and salient products and knowledge that can be more confidently used in decisions. But for this exchange to be successful in practice, it needs to be efficient enough to facilitate an exploratory process that is inherently iterative in determining the most informative products. It also requires a transparent approach that is easily understood and communicated. We will present prototypes of Climate Information Dashboards that integrate a suite of key climate information for resource managers on a single page. The content of the dashboards is based on standardized products that can be assembled to meet specific needs. They were co-designed with water resource managers and are tailored to selected management and decision topics. The visualizations are tuned to quickly provide the basic information, and beneath each diagnostic more detailed analyses can be consulted. These dashboards offer a flexible way to connect decision-makers to climate model output. Conversely, such dashboards can also be applied to inform model development by providing insight into a suite of key characteristics of model performance that have been identified as critical by a sector.

  14. Habitat Suitability Index Models: Eastern meadowlark

    USGS Publications Warehouse

    Schroeder, Richard L.; Sousa, Patrick J.

    1982-01-01

    Habitat preferences of the eastern meadowlark (Sturnella magna) are described in this publication, which is one of a series of Habitat Suitability Index (HSI) models. Habitat use information is presented in a synthesis of the literature on the species-habitat requirements of the eastern meadowlark, followed by the development of the HSI model. The model is presented in three formats: graphic, word, and mathematical, and is designed to provide information for use in impact assessment and habitat management activities.

  15. Habitat Suitability Index Models: Pine warbler

    USGS Publications Warehouse

    Schroeder, Richard L.

    1982-01-01

    Habitat preferences of the pine warbler (Dendroica pinus) are described in this publication, which is one of a series of Habitat Suitability Index (HSI) models. Habitat use information is presented in a synthesis of the literature on the species-habitat requirements of the pine warbler, followed by the development of the HSI model. The model is presented in three formats: graphic, word, and mathematical, and is designed to provide information for use in impact assessment and habitat management activities.

  16. Psychodrama: A Creative Approach for Addressing Parallel Process in Group Supervision

    ERIC Educational Resources Information Center

    Hinkle, Michelle Gimenez

    2008-01-01

    This article provides a model for using psychodrama to address issues of parallel process during group supervision. Information on how to utilize the specific concepts and techniques of psychodrama in relation to group supervision is discussed. A case vignette of the model is provided.

  17. Mouse Tumor Biology (MTB): a database of mouse models for human cancer.

    PubMed

    Bult, Carol J; Krupke, Debra M; Begley, Dale A; Richardson, Joel E; Neuhauser, Steven B; Sundberg, John P; Eppig, Janan T

    2015-01-01

    The Mouse Tumor Biology (MTB; http://tumor.informatics.jax.org) database is a unique online compendium of mouse models for human cancer. MTB provides online access to expertly curated information on diverse mouse models for human cancer and interfaces for searching and visualizing data associated with these models. The information in MTB is designed to facilitate the selection of strains for cancer research and is a platform for mining data on tumor development and patterns of metastases. MTB curators acquire data through manual curation of peer-reviewed scientific literature and from direct submissions by researchers. Data in MTB are also obtained from other bioinformatics resources including PathBase, the Gene Expression Omnibus and ArrayExpress. Recent enhancements to MTB improve the association between mouse models and human genes commonly mutated in a variety of cancers as identified in large-scale cancer genomics studies, provide new interfaces for exploring regions of the mouse genome associated with cancer phenotypes and incorporate data and information related to Patient-Derived Xenograft models of human cancers. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Feasibility of an anticipatory noncontact precrash restraint actuation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kercel, S.W.; Dress, W.B.

    1995-12-31

    The problem of providing an electronic warning of an impending crash to a precrash restraint system a fraction of a second before physical contact differs from more widely explored problems, such as providing several seconds of crash warning to a driver. One approach to precrash restraint sensing is to apply anticipatory system theory. This consists of nested simplified models of the system to be controlled and of the system's environment. It requires sensory information to describe the "current state" of the system and the environment. The models use the sensory data to make a faster-than-real-time prediction about the near future. Anticipation theory is well founded but rarely used. A major problem is to extract real-time current-state information from inexpensive sensors. Providing current-state information to the nested models is the weakest element of the system. Therefore, sensors and real-time processing of sensor signals command the most attention in an assessment of system feasibility. This paper describes the problem definition, potential "showstoppers," and ways to overcome them. It includes experiments showing that inexpensive radar is a practical sensing element. It considers fast and inexpensive algorithms to extract information from sensor data.
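As a toy illustration of the faster-than-real-time prediction the nested models must supply, the sketch below extrapolates a single radar range/range-rate reading under a constant-velocity assumption; the numbers and the actuation threshold are hypothetical, not from the paper:

```python
def predict_time_to_impact(range_m, range_rate_mps):
    """Constant-velocity extrapolation of radar range to contact.

    range_m: current distance to the object (metres).
    range_rate_mps: rate of change of range; negative means closing.
    Returns predicted seconds until contact, or None if the object is
    not closing. A constant-velocity model is the simplest possible
    "model of the environment" in the anticipatory-control sense.
    """
    if range_rate_mps >= 0:
        return None                  # opening or constant range: no impact
    return range_m / -range_rate_mps

# Hypothetical reading: 8 m away, closing at 20 m/s -> 0.4 s to contact.
tti = predict_time_to_impact(8.0, -20.0)
FIRE_THRESHOLD_S = 0.5               # hypothetical actuation threshold
fire = tti is not None and tti < FIRE_THRESHOLD_S
```

A real system would filter noisy radar returns and feed the prediction into the nested models, but the fraction-of-a-second timescale above is the regime the paper addresses.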

  19. AIR QUALITY MODELING OF PM AND AIR TOXICS AT NEIGHBORHOOD SCALES

    EPA Science Inventory

    The current interest in fine particles and toxic pollutants provides an impetus for extending air quality modeling capability toward improving exposure modeling and assessments. Human exposure models require information on concentration derived from interpolation of observati...

  20. ICIS Facility Interest Subject Area Model

    EPA Pesticide Factsheets

    The Integrated Compliance Information System (ICIS) is a web-based system that provides information for the federal enforcement and compliance (FE&C) and the National Pollutant Discharge Elimination System (NPDES) programs.

  1. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base-year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year-2005 information for the different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources, and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  2. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
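Of the methods surveyed, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC) is the most mechanical to reproduce. The sketch below is a generic textbook PRCC implementation applied to a toy model, not code from the study; the parameter roles and coefficients are invented for illustration:

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y.

    X: (n_samples, n_params) parameter draws (e.g. a Latin hypercube);
    y: (n_samples,) model output. Returns one PRCC per parameter:
    the correlation of the ranks of parameter j with the ranks of y,
    after linearly removing the effect of the other parameters' ranks.
    """
    n, k = X.shape
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # ranks
    ry = np.argsort(np.argsort(y)).astype(float)
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        # Residuals of parameter j and of the output after regressing
        # (in rank space) on the remaining parameters.
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Toy "transmission model": output depends strongly on the first
# parameter, weakly (negatively) on the second, not at all on the third.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 5.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)
print(prcc(X, y))   # strongly positive, clearly negative, near zero
```

Monotonic but nonlinear parameter-output relationships, common in epidemic models, are exactly where the rank transform pays off over ordinary partial correlation.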

  3. User Manual for SAHM package for VisTrails

    USGS Publications Warehouse

    Talbert, C.B.; Talbert, M.K.

    2012-01-01

    The Software for Assisted Habitat Modeling (SAHM) has been created both to expedite habitat modeling and to help maintain a record of the various input data, pre- and post-processing steps, and modeling options incorporated in the construction of a species distribution model. The four main advantages of using the combined VisTrails:SAHM package for species distribution modeling are: (1) formalization and tractable recording of the entire modeling process; (2) easier collaboration through a common modeling framework; (3) a user-friendly graphical interface to manage file input, model runs, and output; and (4) extensibility to incorporate future and additional modeling routines and tools. This user manual provides detailed information on each module within the SAHM package, their input, output, common connections, optional arguments, and default settings. This information can also be accessed for individual modules by right-clicking on the documentation button for any module in VisTrails, or by right-clicking on any input or output for a module and selecting "view documentation." This user manual is intended to accompany the user guide, which provides detailed instructions on how to install the SAHM package within VisTrails and then presents information on the use of the package.

  4. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  5. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
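The information-sharing idea described above can be caricatured as a topic-based publish/subscribe dispatcher. The in-process sketch below is purely illustrative (the actual protocol is networked and combines client-server with peer-to-peer distribution), and the parameter name and alert limit are hypothetical:

```python
from collections import defaultdict

class TelemetryBus:
    """Minimal in-process sketch of topic-based information sharing:
    producers publish named telemetry parameters; any number of
    consumer applications subscribe by parameter name."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, parameter, callback):
        self._subscribers[parameter].append(callback)

    def publish(self, parameter, value):
        # Fan the update out to every consumer of this parameter,
        # whether it is a display, a limit monitor, or another peer.
        for callback in self._subscribers[parameter]:
            callback(parameter, value)

# Usage: a hypothetical limit-monitoring flight-controller application.
alerts = []
bus = TelemetryBus()
bus.subscribe("cabin_pressure_kpa",
              lambda p, v: alerts.append((p, v)) if v < 97.9 else None)
bus.publish("cabin_pressure_kpa", 101.3)   # nominal reading
bus.publish("cabin_pressure_kpa", 95.0)    # low reading triggers the alert
```

Decoupling producers from consumers in this way is what lets the same distribution layer serve real-time monitoring, playback, and training applications alike.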

  6. Development of an integrated medical supply information system

    NASA Astrophysics Data System (ADS)

    Xu, Eric; Wermus, Marek; Blythe Bauman, Deborah

    2011-08-01

    The integrated medical supply inventory control system introduced in this study is a hybrid system that is shaped by the nature of medical supply, usage, and the storage capacity limitations of health care facilities. The system links demand, service provided at the clinic, health care providers' information, inventory storage data, and decision support tools into an integrated information system. The ABC analysis method, the economic order quantity (EOQ) model, the two-bin method, and the safety stock concept are applied as decision support models to tackle inventory management issues at health care facilities. In the decision support module, each medical item and storage location has been scrutinised to determine the best-fit inventory control policy. The pilot case study demonstrates that the integrated medical supply information system holds several advantages for inventory managers, since it brings the benefits of deploying enterprise information systems to the management of medical supply and to better patient services.
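Two of the decision-support models named above have closed-form textbook versions. The sketch below shows the standard EOQ formula and a z-score safety-stock rule with hypothetical item data; it illustrates the cited concepts rather than reproducing code from the study:

```python
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost):
    """Classic EOQ: the order size minimizing total ordering plus
    holding cost, Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

def safety_stock(z, demand_std_per_day, lead_time_days):
    """Safety stock for a chosen service-level z-score, given daily
    demand variability and a fixed replenishment lead time."""
    return z * demand_std_per_day * math.sqrt(lead_time_days)

# Hypothetical clinic item: 1200 units/year demand, $50 per order,
# $2 per unit per year holding cost, 9-day lead time, ~95% service level.
q = economic_order_quantity(1200, 50, 2)    # about 245 units per order
ss = safety_stock(1.65, 4, 9)               # about 20 units of buffer stock
```

In a system like the one described, an ABC classification would decide which items merit this EOQ/safety-stock treatment and which can be run on the simpler two-bin rule.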

  7. On the magnetic circular dichroism of benzene. A density-functional study

    NASA Astrophysics Data System (ADS)

    Kaminský, Jakub; Kříž, Jan; Bouř, Petr

    2017-04-01

    Spectroscopy of magnetic circular dichroism (MCD) provides enhanced information on molecular structure and a more reliable assignment of spectral bands than absorption alone. Theoretical modeling can significantly enhance the information obtained from experimental spectra. In the present study, time-dependent density functional theory is employed to model the lowest-energy benzene transitions, in particular to investigate the role of the Rydberg states and vibrational interference in spectral intensities. The effect of solvent is explored on model benzene-methane clusters. For the lowest-energy excitation, the vibrational sub-structure of the absorption and MCD spectra is modeled within the harmonic approximation, providing very good agreement with the experiment. The simulations demonstrate that the Rydberg states have a much stronger effect on the MCD intensities than on the absorption, and a very diffuse basis set must be used to obtain reliable results. The modeling also indicates that the Rydberg-like states and associated transitions may persist in solutions. Continuum-like solvent models are thus not suitable for modeling them; solvent-solute clusters appear to be more appropriate, provided they are large enough.

  8. Medical Device Guidebook: A browser information resource for medical device users.

    PubMed

    Clarkson, Douglas M

    2017-03-01

    A web-based information resource - the 'Medical Device Guidebook' - enabling the safe use of medical devices is described. Medical devices are described within a 'catalogue' of specific models, and information on a specific model is provided within a consistent set of information 'keys'. These include 'user manuals', 'points of caution', 'clinical use framework', 'training/assessment material', 'frequently asked questions', 'authorised user comments' and 'consumables'. The system allows identification of known risks/hazards associated with specific devices, triggered, for example, by national alerts or locally raised safety observations. This provides a mechanism for more effective briefing of equipment users on the associated hazards of equipment. A feature of the system is the inclusion of a specific 'Operational Procedure' for each device; the literature shows that the lack of such a focus is often a key factor in equipment misuse and associated patient injury. The 'Guidebook' provides a mechanism for the development of an information resource built within local clinical networks and encourages a consistent approach to medical device use. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Modeling Magnetic Flux-Ropes Structures

    NASA Astrophysics Data System (ADS)

    Nieves-Chinchilla, T.; Linton, M.; Hidalgo, M. A. U.; Vourlidas, A.; Savani, N.; Szabo, A.; Farrugia, C. J.; Yu, W.

    2015-12-01

    Flux ropes are usually associated with magnetic structures embedded in interplanetary Coronal Mass Ejections (ICMEs) with a depressed proton temperature (called Magnetic Clouds, MCs). However, small-scale flux ropes in the solar wind are also identified, with different formation, evolution, and dynamics involved. We present an analytical model to describe magnetic flux-rope topologies. The model is generalized to different degrees of complexity. It extends the circular-cylindrical concept of Hidalgo et al. (2002) by introducing a general form for the radial dependence of the current density. This generalization provides information on the force distribution inside the flux rope in addition to the usual parameters of flux-rope geometry and orientation. The generalized model provides flexibility for implementation in 3-D MHD simulations.

  10. School District Information Study. Planning for the Management of Information through Technology. A Planning and Staff Development Project.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Div. of Elementary and Secondary Education Planning.

    Designed to provide assistance to school district personnel who seek to develop a plan for information management and related applications of technology, this guide presents the School District Information Study (SDIS) model for the review of management policies, procedures, and activities related to information processing done by school district…

  11. Alliance Building in the Information and Online Database Industry.

    ERIC Educational Resources Information Center

    Alexander, Johanna Olson

    2001-01-01

    Presents an analysis of information industry alliance formation using environmental scanning methods. Highlights include why libraries and academic institutions should be interested; a literature review; historical context; industry and market structures; commercial and academic models; trends; and implications for information providers,…

  12. Information Security Analysis Using Game Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlicher, Bob G; Abercrombie, Robert K

    Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified against the results of game-theoretic analysis and then used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, overcoming limitations of current stochastic game models, which consider only perfect information: they assume that the defender is always able to detect attacks, that the state transition probabilities are fixed before the game, and that the players' actions are always synchronous; moreover, most such models do not scale with the size and complexity of the systems under consideration. Our use of ABMs yields results from selected experiments that demonstrate our proposed approach and provide a quantitative measure for realistic information systems and their related security scenarios.
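    A minimal sketch of the kind of attacker/defender ABM the record describes, with imperfect information modeled as probabilistic detection. All agents, strategies, and rates here are invented for illustration; the actual work involves far richer agents and multiple assets:

    ```python
    import random

    class Attacker:
        """Attacks with a fixed probability each step (stand-in for a strategy)."""
        def act(self):
            return random.random() < 0.5

    class Defender:
        """Imperfect information: an attack is observed only with detect_prob."""
        def __init__(self, detect_prob):
            self.detect_prob = detect_prob
            self.detected = 0

        def observe(self, attacked):
            if attacked and random.random() < self.detect_prob:
                self.detected += 1

    def run(steps=10_000, detect_prob=0.6, seed=1):
        random.seed(seed)
        attacker, defender = Attacker(), Defender(detect_prob)
        attacks = 0
        for _ in range(steps):
            a = attacker.act()
            attacks += a
            defender.observe(a)
        return attacks, defender.detected

    # The measured detection rate estimates the defender's partial visibility.
    attacks, detected = run()
    detection_rate = detected / attacks
    ```

    Running many such simulations under different strategies is what allows cross-checking against the game-theoretic equilibrium analysis.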

  13. ID201202961, DOE S-124,539, Information Security Analysis Using Game Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Schlicher, Bob G

    Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified against the results of game-theoretic analysis and then used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, overcoming limitations of current stochastic game models, which consider only perfect information: they assume that the defender is always able to detect attacks, that the state transition probabilities are fixed before the game, and that the players' actions are always synchronous; moreover, most such models do not scale with the size and complexity of the systems under consideration. Our use of ABMs yields results from selected experiments that demonstrate our proposed approach and provide a quantitative measure for realistic information systems and their related security scenarios.

  14. Modeling Virus Coinfection to Inform Management of Maize Lethal Necrosis in Kenya

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilker, Frank M.; Allen, Linda J. S.; Bokil, Vrushali A.

    Maize lethal necrosis (MLN) has emerged as a serious threat to food security in sub-Saharan Africa. MLN is caused by coinfection with two viruses, Maize chlorotic mottle virus and a potyvirus, often Sugarcane mosaic virus. To better understand the dynamics of MLN and to provide insight into disease management, we modeled the spread of the viruses causing MLN within and between growing seasons. The model allows for transmission via vectors, soil, and seed, as well as exogenous sources of infection. Following model parameterization, we predict how management affects disease prevalence and crop performance over multiple seasons. Resource-rich farmers with large holdings can achieve good control by combining clean seed and insect control. However, crop rotation is often required to effect full control. Resource-poor farmers with smaller holdings must rely on rotation and roguing, and achieve more limited control. For both types of farmer, unless management is synchronized over large areas, exogenous sources of infection can thwart control. As well as providing practical guidance, our modeling framework is potentially informative for other cropping systems in which coinfection has devastating effects. Finally, our work also emphasizes how mathematical modeling can inform management of an emerging disease even when epidemiological information remains scanty.

  15. Modeling Virus Coinfection to Inform Management of Maize Lethal Necrosis in Kenya

    DOE PAGES

    Hilker, Frank M.; Allen, Linda J. S.; Bokil, Vrushali A.; ...

    2017-08-01

    Maize lethal necrosis (MLN) has emerged as a serious threat to food security in sub-Saharan Africa. MLN is caused by coinfection with two viruses, Maize chlorotic mottle virus and a potyvirus, often Sugarcane mosaic virus. To better understand the dynamics of MLN and to provide insight into disease management, we modeled the spread of the viruses causing MLN within and between growing seasons. The model allows for transmission via vectors, soil, and seed, as well as exogenous sources of infection. Following model parameterization, we predict how management affects disease prevalence and crop performance over multiple seasons. Resource-rich farmers with large holdings can achieve good control by combining clean seed and insect control. However, crop rotation is often required to effect full control. Resource-poor farmers with smaller holdings must rely on rotation and roguing, and achieve more limited control. For both types of farmer, unless management is synchronized over large areas, exogenous sources of infection can thwart control. As well as providing practical guidance, our modeling framework is potentially informative for other cropping systems in which coinfection has devastating effects. Finally, our work also emphasizes how mathematical modeling can inform management of an emerging disease even when epidemiological information remains scanty.
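    The within-season dynamics can be sketched with a toy two-virus compartment model in which disease requires coinfection. The rates and the simple Euler scheme below are illustrative assumptions, not the paper's parameterization (which also covers vector, soil, seed, and exogenous transmission across seasons):

    ```python
    # S = healthy, I1/I2 = singly infected, C = coinfected (diseased) fractions.
    def simulate(beta1=0.3, beta2=0.2, days=200, dt=0.1):
        S, I1, I2, C = 0.99, 0.005, 0.005, 0.0
        for _ in range(int(days / dt)):
            # Force of infection of each virus comes from all plants carrying it.
            f1 = beta1 * (I1 + C)
            f2 = beta2 * (I2 + C)
            dS = -(f1 + f2) * S
            dI1 = f1 * S - f2 * I1   # infected by virus 1, may acquire virus 2
            dI2 = f2 * S - f1 * I2
            dC = f2 * I1 + f1 * I2   # coinfection produces the disease state
            S += dS * dt; I1 += dI1 * dt; I2 += dI2 * dt; C += dC * dt
        return S, I1, I2, C

    S, I1, I2, C = simulate()
    ```

    Interventions like clean seed or roguing would enter as reduced initial infection or an extra removal term on I1, I2, and C.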

  16. Practical lessons in remote connectivity.

    PubMed Central

    Kouroubali, A.; Starren, J.; Barrows, R. C.; Clayton, P. D.

    1997-01-01

    Community Health Information Networks (CHINs) require the ability to provide computer network connections to many remote sites. During the implementation of the Washington Heights and Inwood Community Health Management Information System (WHICHIS) at the Columbia-Presbyterian Medical Center (CPMC), a number of remote connectivity issues have been encountered. Both technical and non-technical issues were significant during the installation. We developed a work-flow model for this process which may be helpful to any health care institution attempting to provide seamless remote connectivity. This model is presented and implementation lessons are discussed. PMID:9357643

  17. Inventory of environmental impact models related to energy technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, P.T.; Dailey, N.S.; Johnson, C.A.

    The purpose of this inventory is to identify and collect data on computer simulations and computational models related to the environmental effects of energy source development, energy conversion, or energy utilization. Information for 33 data fields was sought for each model reported. All of the information which could be obtained within the time allotted for completion of the project is presented for each model listed. Efforts will be continued toward acquiring the needed information. Readers who are interested in these particular models are invited to contact ESIC for assistance in locating them. In addition to the standard bibliographic information, other data fields of interest to modelers, such as computer hardware and software requirements, algorithms, applications, and existing model validation information, are included. Indexes are provided for contact person, acronym, keyword, and title. The models are grouped into the following categories: atmospheric transport, air quality, aquatic transport, terrestrial food chains, soil transport, aquatic food chains, water quality, dosimetry and human effects, animal effects, plant effects, and generalized environmental transport. Within these categories, the models are arranged alphabetically by last name of the contact person.

  18. ICIS Perm Storm Water Subject Area Model

    EPA Pesticide Factsheets

    The Integrated Compliance Information System (ICIS) is a web-based system that provides information for the federal enforcement and compliance (FE&C) and the National Pollutant Discharge Elimination System (NPDES) programs.

  19. The Holographic Electron Density Theorem, de-quantization, re-quantization, and nuclear charge space extrapolations of the Universal Molecule Model

    NASA Astrophysics Data System (ADS)

    Mezey, Paul G.

    2017-11-01

    Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive-volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive-volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.
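    The two theorems can be stated compactly in generic density-functional notation (standard textbook formulations, not this paper's own notation):

    ```latex
    % Hohenberg--Kohn: the non-degenerate ground-state density determines the
    % external potential up to an additive constant, hence the Hamiltonian
    % and all molecular properties.
    \rho(\mathbf{r}) \;\Longrightarrow\; v_{\mathrm{ext}}(\mathbf{r}) + C
    \;\Longrightarrow\; \hat{H} \;\Longrightarrow\; \text{all molecular properties}

    % Holographic Electron Density Theorem: any positive-volume fragment of the
    % (real, analytic) density determines the complete density.
    \rho\big|_{D} \;\Longrightarrow\; \rho(\mathbf{r})\ \ \forall\,\mathbf{r}\in\mathbb{R}^{3},
    \qquad D \subset \mathbb{R}^{3},\ \mu(D) > 0
    ```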

  20. One decade of the Data Fusion Information Group (DFIG) model

    NASA Astrophysics Data System (ADS)

    Blasch, Erik

    2015-05-01

    The revision of the Joint Directors of Laboratories (JDL) Information Fusion model in 2004 discussed information processing, incorporated the analyst, and was coined the Data Fusion Information Group (DFIG) model. Since that time, developments in information technology (e.g., cloud computing, applications, and multimedia) have altered the role of the analyst. Data production has outpaced the analyst; however, the analyst still has the role of data refinement and information reporting. In this paper, we highlight three examples being addressed by the DFIG model. The first is the role of the analyst in providing semantic queries (through an ontology) so that the vast amounts of available data can be indexed, accessed, retrieved, and processed. The second is reporting, which requires the analyst to collect the data into a condensed and meaningful form through information management. The last is the interpretation of the resolved information, which must include contextual information not inherent in the data itself. Through a literature review, the DFIG developments of the last decade demonstrate the usability of the DFIG model in bringing together the user (analyst or operator) and the machine (information fusion or manager) in a systems design.

  1. PESTAN: Pesticide Analytical Model Version 4.0 User's Guide

    EPA Pesticide Factsheets

    The principal objective of this User's Guide is to provide essential information on aspects such as model conceptualization, model theory, assumptions and limitations, determination of input parameters, analysis of results, and sensitivity analysis.

  2. Cognitive Diagnostic Models for Tests with Multiple-Choice and Constructed-Response Items

    ERIC Educational Resources Information Center

    Kuo, Bor-Chen; Chen, Chun-Hua; Yang, Chih-Wei; Mok, Magdalena Mo Ching

    2016-01-01

    Traditionally, teachers evaluate students' abilities via their total test scores. Recently, cognitive diagnostic models (CDMs) have begun to provide information about the presence or absence of students' skills or misconceptions. Nevertheless, CDMs are typically applied to tests with multiple-choice (MC) items, which provide less diagnostic…

  3. An Analysis of Academic Research Libraries Assessment Data: A Look at Professional Models and Benchmarking Data

    ERIC Educational Resources Information Center

    Lewin, Heather S.; Passonneau, Sarah M.

    2012-01-01

    This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…

  4. Jobs and Economic Development Impacts (Postcard)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2011-08-01

    The U.S. Department of Energy's Wind Powering America initiative provides information on the Jobs and Economic Development Benefits model. This postcard is a marketing piece that stakeholders can provide to interested parties; it will guide them to the Jobs and Economic Development Benefits model section on the Wind Powering America website.

  5. Using information communication technology in models of integrated community-based primary health care: learning from the iCOACH case studies.

    PubMed

    Steele Gray, Carolyn; Barnsley, Jan; Gagnon, Dominique; Belzile, Louise; Kenealy, Tim; Shaw, James; Sheridan, Nicolette; Wankah Nji, Paul; Wodchis, Walter P

    2018-06-26

    Information communication technology (ICT) is a critical enabler of integrated models of community-based primary health care; however, little is known about how existing technologies have been used to support new models of integrated care. To address this gap, we draw on data from an international study of integrated models, exploring how ICT is used to support activities of integrated care and the organizational and environmental barriers and enablers to its adoption. We take an embedded comparative multiple-case study approach using data from a study of implementation of nine models of integrated community-based primary health care, the Implementing Integrated Care for Older Adults with Complex Health Needs (iCOACH) study. Six cases from Canada, three each in Ontario and Quebec, and three in New Zealand, were studied. As part of the case studies, interviews were conducted with managers and front-line health care providers from February 2015 to March 2017. A qualitative descriptive approach was used to code data from 137 interviews and generate word tables to guide analysis. Despite different models and contexts, we found strikingly similar accounts of the types of activities supported through ICT systems in each of the cases. ICT systems were used most frequently to support activities like care coordination by inter-professional teams through information sharing. However, providers were limited in their ability to efficiently share patient data due to data access issues across organizational and professional boundaries and due to system functionality limitations, such as a lack of interoperability. Even in innovative models of care, managers and providers in our cases mainly use technology to enable traditional ways of working. Technology limitations prevent more innovative uses of technology that could support the disruption necessary to improve care delivery. We argue that the barriers to more innovative use of technology are linked to three factors: (1) information access barriers, (2) limited functionality of available technology, and (3) organizational and provider inertia.

  6. Rational clinical evaluation of suspected acute coronary syndromes: The value of more information.

    PubMed

    Hancock, David G; Chuang, Ming-Yu Anthony; Bystrom, Rebecca; Halabi, Amera; Jones, Rachel; Horsfall, Matthew; Cullen, Louise; Parsonage, William A; Chew, Derek P

    2017-12-01

    Many meta-analyses have provided synthesised likelihood ratio data to aid clinical decision-making. However, much less has been published on how to safely combine clinical information in practice. We aimed to explore the benefits and risks of pooling clinical information during the ED assessment of suspected acute coronary syndrome. Clinical information on 1776 patients was collected within a randomised trial conducted across five South Australian EDs between July 2011 and March 2013. Bayes theorem was used to calculate patient-specific post-test probabilities using age- and gender-specific pre-test probabilities and likelihood ratios corresponding to the presence or absence of 18 clinical factors. Model performance was assessed as the presence of adverse cardiac outcomes among patients theoretically discharged at a post-test probability less than 1%. Bayes theorem-based models containing high-sensitivity troponin T (hs-troponin) outperformed models excluding hs-troponin, as well as models utilising TIMI and GRACE scores. In models containing hs-troponin, a plateau in improving discharge safety was observed after the inclusion of four clinical factors. Models with fewer clinical factors better approximated the true event rate, tended to be safer and resulted in a smaller standard deviation in post-test probability estimates. We showed that there is a definable point where additional information becomes uninformative and may actually lead to less certainty. This evidence supports the concept that clinical decision-making in the assessment of suspected acute coronary syndrome should be focused on obtaining the least amount of information that provides the highest benefit for informing the decisions of admission or discharge. © 2017 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
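    The Bayes-theorem calculation the study describes can be sketched directly: convert the pre-test probability to odds, multiply by the likelihood ratio of each clinical factor present or absent (assuming conditional independence of the factors, the usual caveat), and convert back to a probability. The LR values below are hypothetical, not taken from the trial:

    ```python
    def post_test_probability(pre_test_prob, likelihood_ratios):
        """Chain clinical findings in odds form: odds_post = odds_pre * LR1 * LR2 * ..."""
        odds = pre_test_prob / (1.0 - pre_test_prob)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1.0 + odds)

    # Hypothetical illustration: 10% pre-test probability, combined with a
    # negative hs-troponin (LR 0.1) and an atypical history (LR 0.4).
    p = post_test_probability(0.10, [0.1, 0.4])
    # p falls below the 1% theoretical-discharge threshold used in the study.
    ```

    The study's caution applies here too: each extra (correlated) factor multiplied in can push the estimate further from the true event rate, which is why fewer, well-chosen factors performed better.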

  7. A medical digital library to support scenario and user-tailored information retrieval.

    PubMed

    Chu, W W; Johnson, D B; Kangarloo, H

    2000-06-01

    Current large-scale information sources are designed to support general queries and lack the ability to support scenario-specific information navigation, gathering, and presentation. As a result, users are often unable to obtain desired specific information within a well-defined subject area. Today's information systems do not provide efficient content navigation, incremental appropriate matching, or content correlation. We are developing the following innovative technologies to remedy these problems: 1) scenario-based proxies, enabling the gathering and filtering of information customized for users within a pre-defined domain; 2) context-sensitive navigation and matching, providing approximate matching and similarity links when an exact match to a user's request is unavailable; 3) content correlation of documents, creating semantic links between documents and information sources; and 4) user models for customizing retrieved information and result presentation. A digital medical library is currently being constructed using these technologies to provide customized information for the user. The technologies are general in nature and can provide custom and scenario-specific information in many other domains (e.g., crisis management).

  8. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, Ming‐shu; Whittemore, Donald O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  9. Enhanced semantic interoperability by profiling health informatics standards.

    PubMed

    López, Diego M; Blobel, Bernd

    2009-01-01

    Several standards applied to the healthcare domain support semantic interoperability. These standards are far from being completely adopted in health information system development, however. The objective of this paper is to provide a method and suggest the necessary tooling for reusing standard health information models, thereby supporting the development of semantically interoperable systems and components. The approach is based on the definition of UML Profiles. UML profiling is a formal modeling mechanism to specialize reference meta-models in such a way that it is possible to adapt those meta-models to specific platforms or domains. A health information model can be considered as such a meta-model. The first step of the introduced method identifies the standard health information models and the tasks in the software development process in which healthcare information models can be reused. Then, the selected information model is formalized as a UML Profile. That Profile is finally applied to system models, annotating them with the semantics of the information model. The approach is supported by Eclipse-based UML modeling tools. The method is integrated into a comprehensive framework for health information systems development, and the feasibility of the approach is demonstrated in the analysis, design, and implementation of a public health surveillance system, reusing HL7 RIM and DIMs specifications. The paper describes a method and the necessary tooling for reusing standard healthcare information models. UML offers several advantages such as tooling support, graphical notation, exchangeability, extensibility, semi-automatic code generation, etc. The approach presented is also applicable for harmonizing different standard specifications.

  10. Modeling Evacuation of a Hospital without Electric Power.

    PubMed

    Vugrin, Eric D; Verzi, Stephen J; Finley, Patrick D; Turnquist, Mark A; Griffin, Anne R; Ricci, Karen A; Wyte-Lake, Tamar

    2015-06-01

    Hospital evacuations that occur during, or as a result of, infrastructure outages are complicated and demanding. Loss of infrastructure services can initiate a chain of events with corresponding management challenges. This report describes a modeling case study of the 2001 evacuation of the Memorial Hermann Hospital in Houston, Texas (USA). The study uses a model designed to track such cascading events following loss of infrastructure services and to identify the staff, resources, and operational adaptations required to sustain patient care and/or conduct an evacuation. The model is based on the assumption that a hospital's primary mission is to provide necessary medical care to all of its patients, even when critical infrastructure services to the hospital and surrounding areas are disrupted. Model logic evaluates the hospital's ability to provide an adequate level of care for all of its patients throughout a period of disruption. If hospital resources are insufficient to provide such care, the model recommends an evacuation. Model features also provide information to support evacuation and resource allocation decisions for optimizing care over the entire population of patients. This report documents the application of the model to a scenario designed to resemble the 2001 evacuation of the Memorial Hermann Hospital, demonstrating the model's ability to recreate the timeline of an actual evacuation. The model is also applied to scenarios demonstrating how its output can inform evacuation planning activities and timing.

  11. Multiagent intelligent systems

    NASA Astrophysics Data System (ADS)

    Krause, Lee S.; Dean, Christopher; Lehman, Lynn A.

    2003-09-01

    This paper will discuss a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology by applications such as Effects Based Operations (EBO), evaluation of indicators and warnings surrounding homeland defense, and commercial needs such as financial risk management continue to grow, current single-thread simulations will continue to show serious deficiencies. The types of "what if" analysis required to support these applications demand rapidly re-configurable approaches capable of aggregating large models that incorporate multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through a synthesis of a collection of models. Each model would provide estimates for the overall scenario based upon its particular measure or aspect. An agent framework, denoted the "family", would provide a common ontology in support of differing aspects of the scenario. This approach permits the future of modeling to change from viewing the problem as a single-thread simulation to taking into account multiple viewpoints from different models. Even as models are updated or replaced, the agent approach permits rapid inclusion in new or modified simulations. In this approach, the synthesis of a variety of low- and high-resolution information requires a family of models. Each agent "publishes" its support for a given measure, and each model provides its own estimates on the scenario based upon its particular measure or aspect. If more than one agent provides the same measure (e.g., cognitive), then the results from these agents are combined to form an aggregate measure response. The objective would be to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can then be determined. This is done by a top-level decision system that communicates with the family at the ontology level without any specific understanding of the processes (or models) behind each agent. The increasingly complex demands upon simulation, and the necessity to incorporate the breadth and depth of influencing factors, make a family of agent-based models a promising solution. This paper will discuss that solution, with the syntax and semantics necessary to support the approach.

  12. Investigating different approaches to develop informative priors in hierarchical Bayesian safety performance functions.

    PubMed

    Yu, Rongjie; Abdel-Aty, Mohamed

    2013-07-01

    The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discussed how to formulate informative priors for the independent variables and evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches of developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior with a better model fit and it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. The effects of the different types of informative priors on model estimation and goodness-of-fit have been compared and summarized. Finally, based on the results, recommendations for future research topics and study applications have been made. Copyright © 2013 Elsevier Ltd. All rights reserved.
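    The two-stage updating idea can be sketched with the conjugate gamma-Poisson pair: a vague prior is first updated with historical counts, and the resulting distribution then serves as the informative prior for the current data. The full hierarchical models in the paper are fitted with MCMC; the prior settings and counts below are hypothetical:

    ```python
    def gamma_poisson_update(a, b, counts, exposure_per_count=1.0):
        """Conjugate update: Gamma(a, b) prior on a Poisson rate, with observed
        counts, yields Gamma(a + sum(counts), b + n * exposure) posterior."""
        n = len(counts)
        return a + sum(counts), b + n * exposure_per_count

    # Stage 1: vague Gamma(0.5, 0.1) prior updated with historical crash counts.
    a1, b1 = gamma_poisson_update(0.5, 0.1, [3, 2, 4, 3])

    # Stage 2: the stage-1 posterior becomes the informative prior for new data.
    a2, b2 = gamma_poisson_update(a1, b1, [2, 3])
    posterior_mean = a2 / b2   # shrinks the current-data estimate toward history
    ```

    The same mechanism is what makes an informative prior stabilize coefficient estimates when the current-period data are sparse.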

  13. CIS Program Redesign Driven by IS2010 Model: A Case Study

    ERIC Educational Resources Information Center

    Surendran, Ken; Amer, Suhair; Schwieger, Dana

    2012-01-01

    The release of the IS2010 Model Curriculum has triggered review of existing Information Systems (IS) programs. It also provides an opportunity to replace low enrollment IS programs with flexible ones that focus on specific application domains. In this paper, the authors present a case study of their redesigned Computer Information Systems (CIS)…

  14. Model-based learning and the contribution of the orbitofrontal cortex to the model-free world

    PubMed Central

    McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey

    2012-01-01

    Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030
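    The contrast between cached (model-free) and model-based predictions can be made concrete with a toy devaluation example (hypothetical values; not from the paper): after the outcome is devalued, a model-based estimate updates immediately, while the cached value stays stale until the action is re-experienced.

    ```python
    # A one-step world: an action leads to an outcome with a current value.
    transition = {"lever": "food"}
    outcome_value = {"food": 1.0}

    # Model-free: a value cached earlier (e.g., via TD updates).
    cached_value = {"lever": 1.0}

    def model_based_value(action):
        # Recompute from the model: look up the outcome and its *current* value.
        return outcome_value[transition[action]]

    # Devalue the outcome (e.g., sensory-specific satiety).
    outcome_value["food"] = 0.0

    mb = model_based_value("lever")   # reflects devaluation immediately
    mf = cached_value["lever"]        # stale until the action is re-sampled
    ```

    The review's point is that VS and OFC signals appear to carry the kind of outcome-specific information that only the model-based computation uses.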

  15. 12 CFR Appendix E to Part 225 - Capital Adequacy Guidelines for Bank Holding Companies: Market Risk Measure

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL... Stress tests provide information about the impact of adverse market events on a bank's covered positions. Backtests provide information about the accuracy of an internal model by comparing an organization's daily...

  16. 12 CFR Appendix E to Part 208 - Capital Adequacy Guidelines for State Member Banks; Market Risk Measure

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... provide information about the impact of adverse market events on a bank's covered positions. Backtests provide information about the accuracy of an internal model by comparing a bank's daily VAR measures to... Banks; Market Risk Measure E Appendix E to Part 208 Banks and Banking FEDERAL RESERVE SYSTEM BOARD OF...

  17. Will They Use What You Taught Them? Course-Embedded Assessment of Accounting Students' Information Technology Self-Efficacy

    ERIC Educational Resources Information Center

    Moore, John W.; Mitchem, Cheryl E.

    2004-01-01

    This paper provides a model of course-embedded assessment for use in an undergraduate Accounting Information Systems course, and reports the results obtained from implementation. The profession's educational objectives are mapped to specific computer skills and assignments, to provide direct evidence of learning outcomes. Indirect evidence of…

  18. Soldier Dimensions in Combat Models

    DTIC Science & Technology

    1990-05-07

    and performance. Questionnaires, SQTs, and ARTEPs were often used. Many scales had estimates of reliability but few had validity data. Most studies...pending its validation. Research plans were provided for applications in simulated combat and with simulation devices, for data previously gathered...regarding reliability and validity. Lack of information following an instrument indicates neither reliability nor validity information was provided by the

  19. A qualitative systematic review of internal and external influences on shared decision-making in all health care settings.

    PubMed

    Truglio-Londrigan, Marie; Slyer, Jason T; Singleton, Joanne K; Worral, Priscilla

    The objective of this review is to identify and synthesize the best available evidence related to the meaningfulness of internal and external influences on shared decision-making for adult patients and health care providers in all health care settings. The specific questions to be answered are: BACKGROUND: Patient-centered care is emphasized in today's healthcare arena. This emphasis is seen in the work of the International Alliance of Patients' Organizations (IAOP), which describes patient-centered healthcare as care that is aimed at addressing the needs and preferences of patients. The IAOP presents five principles which are foundational to the achievement of patient-centered healthcare: respect, choice, policy, access and support, as well as information. These five principles are further described as: Within the description of these five principles the idea of shared decision-making is clearly evident. The concept of shared decision-making began to appear in the literature in the 1990s. It is defined as a "process jointly shared by patients and their health care provider. It aims at helping patients play an active role in decisions concerning their health, which is the ultimate goal of patient-centered care." The details of the shared decision-making process are complex and consist of a series of steps including: Three overall representative decision-making models are noted in contemporary literature. These three models include: paternalistic, informed decision-making, and shared decision-making. The paternalistic model is an autocratic style of decision-making in which the healthcare provider carries out care from the perspective of knowing what is best for the patient and therefore makes all decisions. The informed decision-making model takes place as the information needed to make decisions is conveyed to the patient and the patient makes the decisions without healthcare provider involvement. 
Finally, the shared decision-making model represents sharing and negotiation toward treatment decisions. Thus, these models form a continuum, from patient non-participation at one end to informed decision-making, with a high level of patient power, at the other. Several shared decision-making models focus on the process of shared decision-making previously noted. A discussion of several process models follows below. Charles et al. depict a process model of shared decision-making that identifies key characteristics that must be in evidence. The patient shares in the responsibility with the healthcare provider in this model. The key characteristics included: This model illustrates that there must be at least two individuals participating; however, family and friends may be involved in a variety of roles such as collector of information, interpreter of this information, coach, advisor, negotiator, and caretaker. This model also depicts the need to take steps to participate in the shared decision-making process. To take steps means that there is agreement between and among all involved that shared decision-making is necessary and preferred. Research about patient preferences, however, offers divergent views. The link between patient preferences for shared decision-making and the actuality of shared decision-making in practice is not strong. Research concerning patients and patient preferences on shared decision-making points to variations depending on age, education, socio-economic status, culture, and diagnosis. Healthcare providers may also hold preferences for shared decision-making; however, research in this area is not as comprehensive as patient-focused research. Elwyn et al. explored the views of general practice providers on involving patients in decisions. 
Both positive and negative views were identified, ranging from receptiveness, noting potential benefits, to concern about the unrealistic nature of participation and sharing in the decision-making process. An example of this potential difficulty, from a healthcare provider perspective, is the conflict that may develop when a patient's preference differs from clinical practice guidelines. This is further exemplified in healthcare encounters when a situation may not lend itself to a clear answer but rather lies in a grey area. These situations are challenging for healthcare providers. The notion of information sharing as a prerequisite to shared decision-making offers insight into another process. The healthcare provider must give the patient the information that they need to know and understand in order to consider and participate in the shared decision-making process. This information may include the disease, potential treatments, consequences of those treatments, and any alternatives, which may include the decision to do nothing. Without knowing this information, the patient will not be able to participate in the shared decision-making process. The complexity of this step is realized if one considers what the healthcare provider needs to know in order to first assess what the patient knows and does not know, the readiness of the patient to participate in this educational process and learn the information, as well as the individual learning style of the patient, taking into consideration the patient's ideas, values, beliefs, education, culture, literacy, and age. Depending on the results of this assessment, the health care provider then must communicate the information to the patient. This is also a complex process that must take into consideration the relationship, comfort level, and trust between the healthcare provider and the patient. Finally, the treatment decision is reached by the healthcare provider and the patient together. 
Charles et al. portray shared decision-making as a process with the end product, the shared decision, as the outcome. This outcome may be agreement on a treatment decision, no agreement on a treatment decision, or disagreement on a treatment decision. Negotiation is a part of the process, as the "test of a shared decision (as distinct from the decision-making process) is if both parties agree on the treatment option." Towle and Godolphin developed a process model that further exemplifies the roles of the healthcare provider and the patient in the shared decision-making process as mutual partners with mutual responsibilities. The capacity to engage in this shared decision-making rests, therefore, on competencies including knowledge, skills, and abilities for both the healthcare provider and the patient. This mutual partnership and the corresponding competencies are presented for both the healthcare provider and the patient in this model. The competencies noted for the healthcare provider for shared decision-making include: Patient competencies include: This model illustrates the shared decision-making process with emphasis on the roles of the healthcare provider and the patient, very similar to the prior model. This model, however, gives greater emphasis to the co-participation of the healthcare provider and the patient. The co-participation depicts a mutual partnership with mutual responsibilities that can be seen as "reciprocal relationships of dialogue." For this to take place, the relationship between and among the participants of the shared decision-making process is important, along with other internal and external influences such as communication, trust, mutual respect, honesty, time, continuity, and commitment. Cultural, social, and age-group differences; evidence; and team and family are considered within this model. Elwyn et al. 
present yet another model that depicts the shared decision-making process; however, this model offers a view in which the healthcare provider holds greater responsibility in the process. In this particular model the process focuses on the healthcare provider and the essential skills needed to engage the patient in shared decisions. The competencies outlined in this model include: The healthcare provider must demonstrate knowledge, competencies, and skills as a communicator. The skills for communication competency require the healthcare provider to be able to elicit the patient's thoughts and input regarding treatment management throughout the consultation. The healthcare provider must also demonstrate competencies in assessment skills beyond physical assessment that include the ability to assess the patient's perceptions and readiness to participate. In addition, the healthcare provider must be able to assess the patient's readiness to learn the information that the patient needs to know in order to fully engage in the shared decision-making process, assess what the patient already knows, what the patient does not know, and whether or not the information the patient knows is accurate. Once this assessment is completed, the healthcare provider then must draw on his/her knowledge, competencies, and skills to teach the patient what the patient needs to know to be informed. This facilitates the notion of tailor-made information noted previously. The healthcare provider also requires competencies in how to check and evaluate the entire process to ensure that the patient understands and accepts with comfort not only the plan being negotiated but the entire process of sharing in decision-making. 
In addition to the above, there are further competencies, such as competence in working with groups and teams, cultural knowledge, negotiation skills, as well as competencies when faced with ethical challenges. Shared decision-making has been associated with autonomy, empowerment, and effectiveness and efficiency. Both patients and health care providers have noted improved relationships and improved interactions when shared decision-making is in evidence. Along with this improved relationship and interaction, enhanced compliance is noted. Additional research points to patient satisfaction and enhanced quality of life. There is some evidence to suggest that shared decision-making does facilitate positive health outcomes. In today's healthcare environment there is greater emphasis on patient-centered care that exemplifies patient engagement, participation, partnership, and shared decision-making. Given the shift from the more autocratic delivery of care to the shared approach, there is a need to more fully understand the what of shared decision-making as well as how shared decision-making takes place, along with what internal and external influences may encourage, support, and facilitate the shared decision-making process. These influences are intervening variables that may be of significance for the successful development of practice-based strategies to foster shared decision-making in practice. The purpose of this qualitative systematic review is to identify internal and external influences on shared decision-making in all health care settings. A preliminary search of the Joanna Briggs Library of Systematic Reviews, MEDLINE, CINAHL, and PROSPERO did not identify any previously conducted qualitative systematic reviews on the meaningfulness of internal and external influences on shared decision-making.

  20. Symposium Issue on the Energy Information Administration.

    ERIC Educational Resources Information Center

    Kent, Calvin A.; And Others

    1993-01-01

    Describes the Energy Information Administration (EIA), a statistical agency which provides credible, timely, and useful energy information for decision makers in all sectors of society. The 10 articles included in the volume cover survey design, data collection, data integration, data analysis, modeling and forecasting, confidentiality, and…

  1. Attachment in Middle Childhood: Associations with Information Processing

    ERIC Educational Resources Information Center

    Zimmermann, Peter; Iwanski, Alexandra

    2015-01-01

    Attachment theory suggests that internal working models of self and significant others influence adjustment during development by controlling information processing and self-regulation. We provide a conceptual overview on possible mechanisms linking attachment and information processing and review the current literature in middle childhood.…

  2. Digital Elevation Models

    USGS Publications Warehouse

    1993-01-01

    The Earth Science Information Center (ESIC) distributes digital cartographic/geographic data files produced by the U.S. Geological Survey (USGS) as part of the National Mapping Program. Digital cartographic data files may be grouped into four basic types. The first of these, called a Digital Line Graph (DLG), is the line map information in digital form. These data files include information on base data categories, such as transportation, hypsography, hydrography, and boundaries. The second type, called a Digital Elevation Model (DEM), consists of a sampled array of elevations for a number of ground positions at regularly spaced intervals. The third type is Land Use and Land Cover digital data which provides information on nine major classes of land use such as urban, agricultural, or forest as well as associated map data such as political units and Federal land ownership. The fourth type, the Geographic Names Information System, provides primary information for all known places, features, and areas in the United States identified by a proper name.
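    A DEM, as described above, is simply a regularly spaced grid of sampled elevations. A minimal sketch, in which the grid values and the 30-metre spacing are made up for illustration:

```python
spacing = 30.0  # metres between ground samples (hypothetical value)
dem = [          # sampled array of elevations, row-major
    [100.0, 102.0, 105.0],
    [101.0, 104.0, 108.0],
    [103.0, 107.0, 112.0],
]

def elevation(row, col):
    """Elevation at the grid position (row, col)."""
    return dem[row][col]

def ground_offset(row, col):
    """Ground offset (metres east, metres north) of a grid cell
    relative to the grid origin, given the regular spacing."""
    return (col * spacing, row * spacing)
```

    Because the spacing is regular, only the origin, the spacing, and the elevation array need to be stored; ground coordinates are implicit.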

  3. Practitioner Perspectives on a Disaster Management Architecture

    NASA Astrophysics Data System (ADS)

    Moe, K.; Evans, J. D.

    2012-12-01

    The Committee on Earth Observing Satellites (CEOS) Working Group on Information Systems and Services (WGISS) is constructing a high-level reference model for the use of satellites, sensors, models, and associated data products from many different global data and service providers in disaster response and risk assessment. To help streamline broad, effective access to satellite information, the reference model provides structured, shared, holistic views of distributed systems and services - in effect, a common vocabulary describing the system-of-systems building blocks and how they are composed for disaster management. These views are being inferred from real-world experience, by documenting and analyzing how practitioners have gone about using or providing satellite data to manage real disaster events or to assess or mitigate hazard risks. Crucial findings and insights come from case studies of three kinds of experience: - Disaster response and recovery (such as the 2008 Sichuan/Wenchuan earthquake in China; and the 2011 Tohoku earthquake and tsunami in Japan); - Technology pilot projects (such as NASA's Flood Sensor Web pilot in Namibia, or the interagency Virtual Mission Operation Center); - Information brokers (such as the International Charter: Space and Major Disasters, or the U.K.-based Disaster Management Constellation). Each of these experiences sheds light on the scope and stakeholders of disaster management; the information requirements for various disaster types and phases; and the services needed for effective access to information by a variety of users. They also highlight needs and gaps in the supply of satellite information for disaster management. One need stands out: rapid and effective access to complex data from multiple sources, across inter-organizational boundaries. 
This is the near-real-time challenge writ large: gaining access to satellite data resources from multiple organizationally distant and geographically dispersed sources, to meet an urgent need. The case studies and reference model will highlight gaps in data supply and data delivery technologies, and suggest priorities for satellite missions, ground data systems, and third-party service providers.

  4. Building climate adaptation capabilities through technology and community

    NASA Astrophysics Data System (ADS)

    Murray, D.; McWhirter, J.; Intsiful, J. D.; Cozzini, S.

    2011-12-01

    To effectively plan for adaptation to changes in climate, decision makers require infrastructure and tools that will provide them with timely access to current and future climate information. For example, climate scientists and operational forecasters need to access global and regional model projections and current climate information that they can use to prepare monitoring products and reports and then publish these for the decision makers. Through the UNDP Africa Adaptation Programme, an infrastructure is being built across Africa that will provide multi-tiered access to such information. Web-accessible servers running RAMADDA, an open-source content management system for geoscience information, will provide access to the information at many levels: from the raw and processed climate model output to real-time climate conditions and predictions to documents and presentations for government officials. Output from regional climate models (e.g. RegCM4) and downscaled global climate models will be accessible through RAMADDA. The Integrated Data Viewer (IDV) is being used by scientists to create visualizations that assist the understanding of climate processes and projections, using data on these as well as on external servers. Since RAMADDA is more than a data server, it is also being used as a publishing platform for the generated material that will be available and searchable by the decision makers. Users can wade through the enormous volumes of information and extract subsets for their region or project of interest. Participants from 20 countries attended workshops at ICTP during 2011. They received training on setting up and installing the servers and necessary software and are now working on deploying the systems in their respective countries. This is the first time an integrated and comprehensive approach to climate change adaptation has been widely applied in Africa. 
It is expected that this infrastructure will enhance North-South collaboration and improve the delivery of technical support and services. This improved infrastructure will enhance the capacity of countries to provide a wide range of robust products and services in a timely manner.

  5. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    NASA Astrophysics Data System (ADS)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not make it easy to validate topology and configuration information. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments increasingly rely on opportunistic resources, which are by nature more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.

  6. Contracting private sector providers for public sector health services in Jalisco, Mexico: perspectives of system actors

    PubMed Central

    Nigenda, Gustavo H; González, Luz María

    2009-01-01

    Introduction Contracting out health services is a strategy that many health systems in the developing world are following, despite the lack of decisive evidence that this is the best way to improve quality, increase efficiency and expand coverage. A large body of literature has appeared in recent years focusing on the results of several contracting strategies, but very few papers have addressed aspects of the managerial process and how this can affect results. Case description This paper describes and analyses the perceptions and opinions of managers and workers about the benefits and challenges of the contracting model that has been in place for almost 10 years in the State of Jalisco, Mexico. Both qualitative and quantitative information was collected. An open-ended questionnaire was used to obtain information from a group of managers, while information provided by a self-selected group of workers was collected via a closed-ended questionnaire. The analysis contrasted the information obtained from each source. Discussion and Evaluation Findings show that perceptions of managers and workers vary for most of the items studied. For managers the model has been a success, as it has allowed for expansion of coverage based on a cost-effective strategy, while for workers the model also possesses positive elements but fails to provide fair labour relationships, which negatively affects their performance. Conclusion Perspectives of the two main groups of actors in Jalisco's contracting model are important in the design and adjustment of an adequate contracting model that includes managerial elements to give incentives to worker performance, a key element necessary to achieve the model's ultimate objectives. Lessons learnt from this study could be relevant for the experience of contracting models in other developing countries. PMID:19849831

  7. A model for the electronic support of practice-based research networks.

    PubMed

    Peterson, Kevin A; Delaney, Brendan C; Arvanitis, Theodoros N; Taweel, Adel; Sandberg, Elisabeth A; Speedie, Stuart; Richard Hobbs, F D

    2012-01-01

    The principal goal of the electronic Primary Care Research Network (ePCRN) is to enable the development of an electronic infrastructure to support clinical research activities in primary care practice-based research networks (PBRNs). We describe the model that the ePCRN developed to enhance the growth and to expand the reach of PBRN research. Use cases and activity diagrams were developed from interviews with key informants from 11 PBRNs from the United States and United Kingdom. Discrete functions were identified and aggregated into logical components. Interaction diagrams were created, and an overall composite diagram was constructed describing the proposed software behavior. Software for each component was written and aggregated, and the resulting prototype application was pilot tested for feasibility. A practical model was then created by separating application activities into distinct software packages based on existing PBRN business rules, hardware requirements, network requirements, and security concerns. We present an information architecture that provides for essential interactions, activities, data flows, and structural elements necessary for providing support for PBRN translational research activities. The model describes research information exchange between investigators and clusters of independent data sites supported by a contracted research director. The model was designed to support recruitment for clinical trials, collection of aggregated anonymous data, and retrieval of identifiable data from previously consented patients across hundreds of practices. The proposed model advances our understanding of the fundamental roles and activities of PBRNs and defines the information exchange commonly used by PBRNs to successfully engage community health care clinicians in translational research activities. 
By describing the network architecture in a language familiar to software developers, the model provides an important foundation for the development of electronic support for essential PBRN research activities.

  8. Farmers' climate information needs for long-term adaptive decisions: A case study of almonds in CA

    NASA Astrophysics Data System (ADS)

    Jagannathan, K. A.; Jones, A. D.; Pathak, T. B.; Kerr, A. C.; Doll, D.

    2016-12-01

    Despite advances in climate modeling and projections, several sources report that current tools and models are not widely used in the agriculture sector. Farmers, depending on their local context, require information on very specific climatic metrics, such as the start of rains during the planting season or the number of low-temperature days during the growing season. However, such specific climatic information is either not available or is not synthesized and communicated in a manner accessible to these decision-makers. This research aims to bridge the gap between climate information and decision-making needs by providing an improved understanding of what farmers consider relevant climate information, and how these needs compare with current modeling capabilities. Almond is a perennial crop, so any changes in climate within its 25-30 year lifetime can have an adverse impact on crop yield. This makes almond growers vulnerable to medium- and long-term climate change. Hence, providing appropriate information on future climate projections can help guide their decisions on crop types and varieties, as well as management practices that are better adapted to future climatic conditions. Semi-structured exploratory interviews have been conducted with almond growers, farm advisors, and other industry stakeholders, with three goals: (1) to understand how growers have used climate information in the past; (2) to identify key climatic variables that are relevant - including appropriate temporal scales and acceptable uncertainty levels; and (3) to understand communication methods that could improve the usability of climate information for farm-level decision-making. The interviews showcased a great diversity amongst growers in terms of how they used weather/climate information. 
Discussions also indicated that climate information has the potential to inform long-term decisions, but only if it is provided with the right context, terminology, and communication channels. The findings offer valuable bottom-up insights into farmers' perspectives on the relevance of climate information. These results will also be compared with current modeling capabilities in order to synthesize conclusions for improving the usability of climate science for agricultural decision-makers.

  9. Evolution of the archaeal and mammalian information processing systems: towards an archaeal model for human disease.

    PubMed

    Lyu, Zhe; Whitman, William B

    2017-01-01

    Current evolutionary models suggest that Eukaryotes originated from within the Archaea instead of being a sister lineage. To test this model of ancient evolution, we review recent studies and compare the three major information processing subsystems of replication, transcription and translation in the Archaea and Eukaryotes. Our hypothesis is that if the Eukaryotes arose within the archaeal radiation, their information processing systems will appear to be of a kind with the archaeal systems and not wholly original. Within the Eukaryotes, the mammalian or human systems are emphasized because of their importance in understanding health. Biochemical as well as genetic studies provide strong evidence for the functional similarity of archaeal homologs to the mammalian information processing system and their dissimilarity to the bacterial systems. In many independent instances, a simple archaeal system is functionally equivalent to more elaborate eukaryotic homologs, suggesting that evolution of complexity is likely a central feature of the eukaryotic information processing system. Because fewer components are often involved, biochemical characterizations of the archaeal systems are often easier to interpret. Similarly, the archaeal cell provides a genetically and metabolically simpler background, enabling convenient studies of the complex information processing system. Therefore, Archaea could serve as a parsimonious and tractable host for studying human diseases that arise in the information processing systems.

  10. Extending 3D city models with legal information

    NASA Astrophysics Data System (ADS)

    Frank, A. U.; Fuhrmann, T.; Navratil, G.

    2012-10-01

    3D city models represent existing physical objects and their topological and functional relations. In everyday life the rights and responsibilities connected to these objects, primarily legally defined rights and obligations but also other socially and culturally established rights, are of importance. The rights and obligations are defined in various laws, and it is often difficult to identify the rules applicable to a certain case. Existing 2D cadastres show civil-law rights and obligations, and plans to extend them to provide information about public-law restrictions on land use are under way in several countries. It is tempting to design extensions to 3D city models to provide information about legal rights in 3D. The paper analyses the different types of information that are needed to reduce conflicts and to facilitate decisions about land use. We identify the role that 3D city models augmented with 3D planning information can play, but do not advocate a general conversion from 2D to 3D for the legal cadastre. Space is not isotropic: the up/down dimension is practically very different from the two horizontal dimensions - this difference must be respected when designing spatial information systems. The conclusions are: (1) continue the current regime for ownership of apartments, which is not ownership of a 3D volume but co-ownership of a building with exclusive use of some rooms; such exclusive use rights could be shown in a 3D city model; (2) ownership of 3D volumes for complex and unusual building situations can be reported in a 3D city model, but is not required everywhere; (3) indicate restrictions on land use and building in 3D city models, with links to the legal sources.

  11. MAGDM linear-programming models with distinct uncertain preference structures.

    PubMed

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
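
    The interval-weight setting described above can be illustrated with a toy linear program (a sketch, not the authors' formulation; the scores and bounds below are hypothetical): choose attribute weights that lie within given intervals and sum to one so as to maximize an alternative's aggregate score. Because the feasible set is a box intersected with a simplex, a greedy allocation solves this small LP exactly.

```python
def max_weighted_score(scores, bounds):
    """Maximize sum(w_i * scores_i) s.t. lo_i <= w_i <= hi_i and sum(w_i) == 1.

    Greedy: start every weight at its lower bound, then assign the
    remaining mass to attributes in decreasing order of score.
    """
    w = [lo for lo, _ in bounds]
    slack = 1.0 - sum(w)
    if slack < 0:
        raise ValueError("lower bounds already exceed 1")
    # spend the slack on the highest-scoring attributes first
    for i in sorted(range(len(scores)), key=lambda i: -scores[i]):
        add = min(bounds[i][1] - w[i], slack)
        w[i] += add
        slack -= add
    if slack > 1e-9:
        raise ValueError("upper bounds cannot reach 1")
    return w, sum(wi * si for wi, si in zip(w, scores))

# one alternative's attribute ratings and interval weight information (hypothetical)
scores = [0.9, 0.6, 0.4]
bounds = [(0.2, 0.5), (0.2, 0.5), (0.1, 0.4)]
w, best = max_weighted_score(scores, bounds)
```

    For larger instances with decision-matrix constraints across many alternatives, a general LP solver would replace the greedy step; the greedy form works here only because the constraints are a box plus a single equality.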

  12. Comparison of Nurse Staffing Measurements in Staffing-Outcomes Research.

    PubMed

    Park, Shin Hye; Blegen, Mary A; Spetz, Joanne; Chapman, Susan A; De Groot, Holly A

    2015-01-01

    Investigators have used a variety of operational definitions of nursing hours of care in measuring nurse staffing for health services research. However, little is known about which approach is best for nurse staffing measurement. To examine whether various nursing hours measures yield different model estimations when predicting patient outcomes and to determine the best method to measure nurse staffing based on the model estimations. We analyzed data from the University HealthSystem Consortium for 2005. The sample comprised 208 hospital-quarter observations from 54 hospitals, representing information on 971 adult-care units and about 1 million inpatient discharges. We compared regression models using different combinations of staffing measures based on productive/nonproductive and direct-care/indirect-care hours. Akaike Information Criterion and Bayesian Information Criterion were used in the assessment of staffing measure performance. The models that included the staffing measure calculated from productive hours by direct-care providers were best, in general. However, the Akaike Information Criterion and Bayesian Information Criterion differences between models were small, indicating that distinguishing nonproductive and indirect-care hours from productive direct-care hours does not substantially affect the approximation of the relationship between nurse staffing and patient outcomes. This study is the first to explicitly evaluate various measures of nurse staffing. Productive hours by direct-care providers are the strongest measure related to patient outcomes and thus should be preferred in research on nurse staffing and patient outcomes.
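
    The model-comparison logic used in this record can be sketched generically (toy data, not the study's staffing measures): fit two candidate regressions by least squares and score them with Gaussian-likelihood AIC and BIC, where lower values indicate a better trade-off of fit against parameter count.

```python
import math

def fit_ols_1d(x, y):
    # slope and intercept by ordinary least squares
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def aic_bic(rss, n, k):
    # Gaussian-likelihood AIC/BIC, up to an additive constant shared by
    # all models fit to the same n observations
    ll = -0.5 * n * math.log(rss / n)
    return 2 * k - 2 * ll, k * math.log(n) - 2 * ll

# toy data: y is nearly linear in x
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

a, b = fit_ols_1d(x, y)
rss1 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))   # slope model, k=2
rss0 = sum((yi - sum(y) / len(y)) ** 2 for yi in y)            # intercept only, k=1
aic1, bic1 = aic_bic(rss1, len(y), k=2)
aic0, bic0 = aic_bic(rss0, len(y), k=1)
```

    As in the staffing study, small AIC/BIC differences between candidate measures would indicate that the extra distinction does not substantially improve the approximation.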

  13. Flexing dual-systems models: How variable cognitive control in children informs our understanding of risk-taking across development.

    PubMed

    Li, Rosa

    2017-10-01

    Prevailing models of the development of decision-making propose that peak risk-taking occurs in adolescence due to a neural imbalance between two processes: gradual, linearly developing cognitive control and rapid, non-linearly developing reward-processing. Though many studies have found neural evidence supporting this dual-systems imbalance model, its behavioral predictions have been surprisingly difficult to document. Most laboratory studies have not found adolescents to exhibit greater risk-taking than children, and public health data show everyday risk-taking to peak in late adolescence/early adulthood. Moreover, when adolescents are provided detailed information about decision options and consequences, they evince similar behavior to adults. Such findings point to a critical feature of the development of decision-making that is missed by imbalance models. Specifically, the engagement of cognitive control is context dependent, such that cognitive control and therefore advantageous decision-making increases when available information is high and decreases when available information is low. Furthermore, the context dependence of cognitive control varies across development, such that increased information availability benefits children more than adolescents, who benefit more than adults. This review advances a flexible dual-systems model that is only imbalanced under certain conditions; explains disparities between neural, behavioral, and public health findings; and provides testable hypotheses for future research. Copyright © 2017 The Author. Published by Elsevier Ltd. All rights reserved.

  14. Satellite data driven modeling system for predicting air quality and visibility during wildfire and prescribed burn events

    NASA Astrophysics Data System (ADS)

    Nair, U. S.; Keiser, K.; Wu, Y.; Maskey, M.; Berendes, D.; Glass, P.; Dhakal, A.; Christopher, S. A.

    2012-12-01

    The Alabama Forestry Commission (AFC) is responsible for wildfire control and also prescribed burn management in the state of Alabama. Visibility and air quality degradation resulting from smoke are two pieces of information that are crucial for this activity. Currently the tools available to AFC are the dispersion index from the National Weather Service and surface smoke concentrations. The former provides broad guidance for prescribed burning activities but does not provide specific information regarding smoke transport, areas affected, and quantification of air quality and visibility degradation. While the NOAA operational air quality guidance includes surface smoke concentrations from existing fire events, it does not account for contributions from background aerosols, which are important for the southeastern region including Alabama. Also lacking is the quantification of visibility. The University of Alabama in Huntsville has developed a state-of-the-art integrated modeling system to address these concerns. This system is based on the Community Multiscale Air Quality (CMAQ) modeling system; it ingests satellite-derived smoke emissions and assimilates NASA MODIS-derived aerosol optical thickness. In addition, this operational modeling system simulates the impact of potential prescribed burn events based on location information derived from the AFC prescribed burn permit database. A Lagrangian model is used to simulate smoke plumes for the prescribed burn requests. The combined air quality and visibility degradation resulting from these smoke plumes and background aerosols is computed, and the information is made available through a web-based decision support system built with open-source GIS components. This system provides information regarding intersections between highways and other critical facilities such as old age homes, hospitals, and schools. The system also includes satellite-detected fire locations and other satellite-derived datasets relevant to fire and smoke management.

  15. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning

    PubMed Central

    Matsunaga, Yasuhiro

    2018-01-01

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. PMID:29723137

  16. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning.

    PubMed

    Matsunaga, Yasuhiro; Sugita, Yuji

    2018-05-03

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.
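
    The Markov state model at the core of this record can be illustrated in miniature (a hypothetical three-state folding model, not the WW-domain MSM from the paper): state populations are propagated with a row-stochastic transition matrix until they reach the stationary distribution.

```python
def propagate(p, T, steps=1):
    """Propagate a population vector p through transition matrix T."""
    n = len(p)
    for _ in range(steps):
        p = [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]
    return p

# hypothetical 3-state transition matrix (rows sum to 1):
# states: unfolded -> intermediate -> folded
T = [[0.90, 0.10, 0.00],
     [0.05, 0.80, 0.15],
     [0.00, 0.02, 0.98]]

p0 = [1.0, 0.0, 0.0]                    # start fully unfolded
stationary = propagate(p0, T, steps=5000)
```

    In the paper's data-assimilation step, the entries of such a matrix would be re-estimated so that the model's predicted observables match the single-molecule FRET time series; here the matrix is fixed and only its long-time behavior is shown.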

  17. A Journey in Standard Development: The Core Manufacturing Simulation Data (CMSD) Information Model.

    PubMed

    Lee, Yung-Tsun Tina

    2015-01-01

    This report documents a journey "from research to an approved standard" of a NIST-led standard development activity. That standard, Core Manufacturing Simulation Data (CMSD) information model, provides neutral structures for the efficient exchange of manufacturing data in a simulation environment. The model was standardized under the auspices of the international Simulation Interoperability Standards Organization (SISO). NIST started the research in 2001 and initiated the standardization effort in 2004. The CMSD standard was published in two SISO Products. In the first Product, the information model was defined in the Unified Modeling Language (UML) and published in 2010 as SISO-STD-008-2010. In the second Product, the information model was defined in Extensible Markup Language (XML) and published in 2013 as SISO-STD-008-01-2012. Both SISO-STD-008-2010 and SISO-STD-008-01-2012 are intended to be used together.

  18. Terminology model discovery using natural language processing and visualization techniques.

    PubMed

    Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol

    2006-12-01

    Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.

  19. Epidemic model for information diffusion in web forums: experiments in marketing exchange and political dialog.

    PubMed

    Woo, Jiyoung; Chen, Hsinchun

    2016-01-01

    As social media has become more prevalent, its influence on business, politics, and society has become significant. Due to easy access and interaction between large numbers of users, information diffuses in an epidemic style on the web. Understanding the mechanisms of information diffusion through these new publication methods is important for political and marketing purposes. Among social media, web forums, where people in online communities disseminate and receive information, provide a good environment for examining information diffusion. In this paper, we model topic diffusion in web forums using the susceptible-infected-recovered (SIR) model, an epidemiological model frequently used in previous research to analyze both disease outbreaks and knowledge diffusion. The model was evaluated on a large longitudinal dataset from the web forum of a major retail company and from a general political discussion forum. The fitting results showed that the SIR model is a plausible model to describe the diffusion process of a topic. This research shows that epidemic models can expand their application areas to topic discussion on the web, particularly social media such as web forums.
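
    A minimal sketch of SIR dynamics in this setting (hypothetical rates and forward-Euler time stepping, not the authors' fitting procedure): susceptible forum users become "infected" with a topic at rate beta*S*I, infected users stop posting at rate gamma*I, and the three population fractions always sum to one.

```python
def sir(beta, gamma, s0, i0, r0, dt=0.1, steps=1000):
    """Forward-Euler integration of the SIR equations (population fractions)."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptible users picking up the topic
        new_rec = gamma * i * dt      # infected users losing interest
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# hypothetical rates for a topic "outbreak" in a forum (R0 = beta/gamma = 5)
hist = sir(beta=0.5, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
peak_interest = max(i for _, i, _ in hist)
```

    Fitting the model to forum data, as in the paper, would amount to choosing beta and gamma so that the infected-fraction curve tracks the observed number of users actively discussing a topic.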

  20. Habitat Suitability Index Models: Beaver

    USGS Publications Warehouse

    Allen, Arthur W.

    1982-01-01

    Habitat preferences of the beaver (Castor canadensis) are described in this publication, which is one of a series of Habitat Suitability Index (HSI) models. Habitat use information is presented in a synthesis of the literature on the species-habitat requirements of the beaver, followed by the development of the HSI model. The model is designed to provide information for use in impact assessment and habitat management activities, and should be used in conjunction with habitat evaluation procedures previously developed by the Fish and Wildlife Service. This revised model updates the original publication dated September 1982.

  1. A Report on the Validation of Beryllium Strength Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Derek Elswick

    2016-02-05

    This report discusses work on validating beryllium strength models with flyer plate and Taylor rod experimental data. Strength models are calibrated with Hopkinson bar and quasi-static data. The Hopkinson bar data for beryllium provide strain rates up to about 4000 per second. A limitation of the Hopkinson bar data for beryllium is that they only provide information on strain up to about 0.15. The lack of high-strain data at high strain rates makes it difficult to distinguish between various strength model settings. The PTW model has been calibrated many different times over the last 12 years. The lack of high-strain data at high strain rates has resulted in these calibrated PTW models for beryllium exhibiting significantly different behavior when extrapolated to high strain. For beryllium, the α parameter of PTW has recently been calibrated to high-precision shear modulus data; in the past the α value for beryllium was set based on expert judgment. The new α value for beryllium was used in a calibration of the beryllium PTW model by Sky Sjue. The calibration by Sjue used EOS table information to model the temperature dependence of the heat capacity and the density changes of the beryllium sample during the Hopkinson bar and quasi-static experiments. In this paper, the calibrated PTW model by Sjue is compared against experimental data and other strength models. The other strength models considered are a PTW model calibrated by Shuh-Rong Chen and a Steinberg-Guinan-type model by John Pedicini. The three strength models are compared against flyer plate and Taylor rod data. The results show that the Chen PTW model provides better agreement with these data. The Chen PTW model settings have been previously adjusted to provide a better fit to flyer plate data, whereas the Sjue PTW model has not been changed based on flyer plate data. However, the Sjue model provides a reasonable fit to flyer plate and Taylor rod data, and also gives a better match to recently analyzed Z-machine data, which reach a strain of about 0.35 and a strain rate of 3 × 10^5 s^-1.

  2. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the times at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
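
    Item (1), MAP decoding under a concave log-likelihood, can be sketched for a one-dimensional stimulus (hypothetical tuning parameters; the paper's encoding models are richer point-process models): with Poisson spike counts, an exponential link, and a Gaussian prior, the log posterior is concave in the stimulus, so even a dense grid search recovers the MAP estimate.

```python
import math

def log_posterior(x, counts, tuning, prior_mu=0.0, prior_var=1.0):
    """Poisson encoding model with exponential link: rate_c = exp(a_c + b_c * x).

    The Poisson log-likelihood with this link is concave in x, and adding
    a Gaussian log-prior keeps the log posterior concave.
    """
    lp = -0.5 * (x - prior_mu) ** 2 / prior_var          # Gaussian prior
    for y, (a, b) in zip(counts, tuning):
        lp += y * (a + b * x) - math.exp(a + b * x)      # Poisson log-likelihood
    return lp

# two hypothetical neurons with opposite tuning slopes
tuning = [(1.0, 2.0), (1.0, -2.0)]
counts = [8, 1]   # the positively tuned cell fired a lot -> stimulus likely positive

grid = [i / 1000 for i in range(-3000, 3001)]
x_map = max(grid, key=lambda x: log_posterior(x, counts, tuning))
```

    In practice the concavity is exploited with Newton or conjugate-gradient ascent rather than a grid, which is what makes the MAP estimate tractable for high-dimensional stimuli.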

  3. Digital modulation and achievable information rates of thru-body haptic communications

    NASA Astrophysics Data System (ADS)

    Hanisch, Natalie; Pierobon, Massimiliano

    2017-05-01

    The ever-increasing biocompatibility and pervasive nature of wearable and implantable devices demand novel sustainable solutions for their connectivity, which can impact broad application scenarios in fields such as defense, biomedicine, and entertainment. While wireless electromagnetic communications face challenges such as device miniaturization, energy scarcity, limited range, and the possibility of interception, solutions not only inspired by but also based on natural communication means might prove valid alternatives. In this paper, a communication paradigm where digital information is propagated through the nervous system is proposed and analyzed on the basis of achievable information rates. In particular, this paradigm is based on an analytical framework in which the response of a system based on haptic (tactile) information transmission and ElectroEncephaloGraphy (EEG)-based reception is modeled and characterized. Computational neuroscience models of somatosensory signal representation in the brain, coupled with models of the generation and propagation of somatosensory stimulation from skin mechanoreceptors, are employed to provide a proof-of-concept evaluation of achievable performance in encoding information bits into tactile stimulation and decoding them from the recorded brain activity. Based on these models, the system is simulated and the resulting data are used to train a Support Vector Machine (SVM) classifier, which provides a proof-of-concept validation of system performance in terms of information rates against bit error probability at reception.
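
    The bit-decoding step can be mimicked with a deliberately simpler stand-in: instead of an SVM trained on simulated EEG features, a minimal perceptron separates two synthetic Gaussian feature clusters standing in for "bit 0" and "bit 1" (everything below is illustrative, not the authors' pipeline).

```python
import random

def train_perceptron(data, epochs=50, lr=0.1):
    """data: list of (features, label) pairs with label in {-1, +1}."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            # update only on misclassified (or zero-margin) examples
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

random.seed(0)
# synthetic 2-D "feature" clusters for bit 0 (label -1) and bit 1 (label +1)
data = [([random.gauss(-1, 0.3), random.gauss(-1, 0.3)], -1) for _ in range(50)]
data += [([random.gauss(1, 0.3), random.gauss(1, 0.3)], +1) for _ in range(50)]

w, b = train_perceptron(data)
errors = sum(1 for x, y in data
             if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0)
```

    An SVM would additionally maximize the margin between the two clusters, which matters when the simulated EEG features overlap; for well-separated clusters the two classifiers behave similarly.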

  4. Emergency response and field observation activities of geoscientists in California (USA) during the September 29, 2009, Samoa Tsunami

    NASA Astrophysics Data System (ADS)

    Wilson, Rick I.; Dengler, Lori A.; Goltz, James D.; Legg, Mark R.; Miller, Kevin M.; Ritchie, Andy; Whitmore, Paul M.

    2011-07-01

    State geoscientists (geologists, geophysicists, seismologists, and engineers) in California work closely with federal, state and local government emergency managers to help prepare coastal communities for potential impacts from a tsunami before, during, and after an event. For teletsunamis, as scientific information (forecast model wave heights, first-wave arrival times, etc.) from NOAA's West Coast and Alaska Tsunami Warning Center is made available, federal- and state-level emergency managers must help convey this information in a concise, comprehensible and timely manner to local officials who ultimately determine the appropriate response activities for their jurisdictions. During the September 29, 2009 Tsunami Advisory for California, government geoscientists assisted the California Emergency Management Agency by providing technical assistance during teleconference meetings with NOAA and other state and local emergency managers prior to the arrival of the tsunami. This technical assistance included background information on anticipated tidal conditions when the tsunami was set to arrive, wave height estimates from state-modeled scenarios for areas not covered by NOAA's forecast models, and clarifying which regions of the state were at greatest risk. Over the last year, state geoscientists have started to provide additional assistance: 1) working closely with NOAA to simplify their tsunami alert messaging and expand their forecast modeling coverage; 2) creating "playbooks" containing information from existing tsunami scenarios for local emergency managers to reference during an event; and, 3) developing a state-level information "clearinghouse" and pre-tsunami field response team to assist local officials as well as observe and report tsunami effects. Activities of geoscientists were expanded during the more recent Tsunami Advisory on February 27, 2010, including deploying a geologist from the California Geological Survey as a field observer who provided information back to emergency managers.

  5. A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents

    PubMed Central

    Griol, David

    2016-01-01

    Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and human-computer interfaces. In this contribution, we present a framework based on neural networks that allows modeling of the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system taking into consideration the user's needs and preferences. We have evaluated our proposal to develop a user-adapted spoken dialogue system that facilitates tourist information and services and provide a detailed discussion of the positive influence of our proposal in the success of the interaction, the information and services provided, and the quality perceived by the users. PMID:26819592

  6. Developing a general conceptual framework for avian conservation science

    USGS Publications Warehouse

    Sauer, J.R.

    2003-01-01

    Avian conservation science in North America has produced a variety of monitoring programs designed to provide information on population status of birds. Waterfowl surveys provide population estimates for breeding ducks over most of the continent, the North American Breeding Bird Survey (BBS) provides indexes to population change for >400 breeding bird species, and many other surveys exist that index bird populations at a variety of scales and seasons. However, many fundamental questions about bird population change remain unanswered. I suggest that analyses of monitoring data provide limited understanding of causes of population change, and that the declining species paradigm (Caughley 1994) is sometimes an inefficient approach to increasing our understanding of causes of population change. In North America, the North American Bird Conservation Initiative (NABCI) provides an opportunity to implement alternative approaches that use management, modeling of population responses to management, and monitoring in combination to increase our understanding of bird populations. In adaptive resources management, modeling provides predictions about consequences of management, and monitoring data allow us to assess the population consequences of management. In this framework, alternative hypotheses about response of populations to management can be evaluated by formulating a series of models with differing structure, and management and monitoring provide information about which model best predicts population response.

  7. Small interstellar molecules and what they tell us

    NASA Astrophysics Data System (ADS)

    Neufeld, David A.

    2018-06-01

    Observations at ultraviolet, visible, infrared and radio wavelengths provide a wealth of information about the molecular inventory of the interstellar medium (ISM). Because of the different chemical pathways responsible for their formation and destruction, different molecules probe specific aspects of the interstellar environment. Carefully interpreted with the use of astrochemical models, they provide unique information of general astrophysical importance, yielding estimates of the cosmic ray density, the molecular fraction, the ultraviolet radiation field, and the dissipation of energy within the turbulent ISM. Laboratory experiments and quantum-mechanical calculations are essential both in providing the spectroscopic data needed to identify interstellar molecules and for elucidating the fundamental physical and chemical processes that must be included in astrochemical models.

  8. A portal for the ocean biogeographic information system

    USGS Publications Warehouse

    Zhang, Yunqing; Grassle, J. F.

    2002-01-01

    Since its inception in 1999 the Ocean Biogeographic Information System (OBIS) has developed into an international science program as well as a globally distributed network of biogeographic databases. An OBIS portal at Rutgers University provides the links and functional interoperability among member database systems. Protocols and standards have been established to support effective communication between the portal and these functional units. The portal provides distributed data searching, a taxonomy name service, a GIS with access to relevant environmental data, biological modeling, and education modules for mariners, students, environmental managers, and scientists. The portal will integrate Census of Marine Life field projects, national data archives, and other functional modules, and provides for network-wide analyses and modeling tools.

  9. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of the information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation, and how it can support the evolution of information systems.

  10. Factors modulating social influence on spatial choice in rats.

    PubMed

    Bisbing, Teagan A; Saxon, Marie; Sayde, Justin M; Brown, Michael F

    2015-07-01

    Three experiments examined the conditions under which the spatial choices of rats searching for food are influenced by the choices made by other rats. Model rats learned a consistent set of baited locations in a 5 × 5 matrix of locations, some of which contained food. In Experiment 1, subject rats could determine the baited locations after choosing 1 location because all of the baited locations were on the same side of the matrix during each trial (the baited side varied over trials). Under these conditions, the social cues provided by the model rats had little or no effect on the choices made by the subject rats. The lack of social influence on choices occurred despite a simultaneous social influence on rats' location in the testing arena (Experiment 2). When the outcome of the subject rats' own choices provided no information about the positions of other baited locations, on the other hand, social cues strongly controlled spatial choices (Experiment 3). These results indicate that social information about the location of food influences spatial choices only when those cues provide valid information that is not redundant with the information provided by other cues. This suggests that social information is learned about, processed, and controls behavior via the same mechanisms as other kinds of stimuli. (c) 2015 APA, all rights reserved.

  11. Building a foundation for continued dialogue between climate science and water resource communities

    NASA Astrophysics Data System (ADS)

    Vano, J. A.; Arnold, J.; Clark, M. P.; Gutmann, E. D.; Hamman, J.; Nijssen, B.; Wood, A.

    2017-12-01

    Research into climate change has led to the development of many global climate models, downscaling techniques, and impacts models. This proliferation of information has resulted in insights into how climate change will impact hydrology that are more robust than any single approach could provide, which is helpful for advancing the science. However, the variety of approaches makes navigating what information to use in water resource planning and management challenging. Each technique has strengths, weaknesses, and associated uncertainties, and approaches are continually being updated. Here we provide user-focused, modularly framed guidance that is designed to be expandable and in which updates can be targeted. It describes dos and don'ts for using climate change information in water resource planning and management and can be read at multiple levels. It provides context for those seeking to understand the general need, opportunities, and challenges of including climate change information. It also provides details (frequently asked questions and examples) and direction to further guidance and resources for those engaged in the technical work. This guidance is intended to provide a foundation for continued dialogue within and between the climate science and application communities, and to increase the utility and appropriate use of climate change information.

  12. Improvement of sand filter and constructed wetland design using an environmental decision support system.

    PubMed

    Turon, Clàudia; Comas, Joaquim; Torrens, Antonina; Molle, Pascal; Poch, Manel

    2008-01-01

    With the aim of improving effluent quality of waste stabilization ponds, different designs of vertical flow constructed wetlands and intermittent sand filters were tested on an experimental full-scale plant within the framework of a European project. The information extracted from this study was completed and updated with heuristic and bibliographic knowledge. The data and knowledge acquired were difficult to integrate into mathematical models because they involve qualitative information and expert reasoning. Therefore, it was decided to develop an environmental decision support system (EDSS-Filter-Design) as a tool to integrate mathematical models and knowledge-based techniques. This paper describes the development of this support tool, emphasizing the collection of data and knowledge and representation of this information by means of mathematical equations and a rule-based system. The developed support tool provides the main design characteristics of filters: (i) required surface, (ii) media type, and (iii) media depth. These design recommendations are based on wastewater characteristics, applied load, and required treatment level data provided by the user. The results of the EDSS-Filter-Design provide appropriate and useful information and guidelines on how to design filters, according to the expert criteria. The encapsulation of the information into a decision support system reduces the design period and provides a feasible, reasoned, and positively evaluated proposal.
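    A rule-based design recommendation of this kind can be sketched in miniature. The thresholds, media types, and function below are invented for illustration and are not EDSS-Filter-Design's actual expert rules:

```python
# Toy rule-based sketch of a filter-design recommender in the spirit of an
# EDSS: qualitative expert rules plus a simple sizing equation.
# All numbers and media choices are invented, not the system's real rules.
def recommend_filter(bod_load_g_m2_d, required_level):
    """Return (surface_factor, media_type, media_depth_m) from simple rules."""
    if required_level == "secondary":
        media, depth = "coarse sand", 0.6
    else:  # more stringent treatment level
        media, depth = "fine sand", 0.9
    # Higher applied organic load -> proportionally larger required surface.
    surface_factor = 1.0 if bod_load_g_m2_d <= 20 else bod_load_g_m2_d / 20
    return surface_factor, media, depth

print(recommend_filter(30, "tertiary"))  # (1.5, 'fine sand', 0.9)
```

The appeal of encapsulating such rules in a decision support system, as the paper argues, is that qualitative expert reasoning and quantitative sizing live in one reproducible tool.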

  13. BIM and IoT: A Synopsis from GIS Perspective

    NASA Astrophysics Data System (ADS)

    Isikdag, U.

    2015-10-01

    Internet-of-Things (IoT) focuses on enabling communication between all devices, things that are existent in real life or that are virtual. Building Information Models (BIMs) and Building Information Modelling have been buzzwords of the construction industry for the last 15 years. BIMs emerged as a result of a push by software companies to tackle the problems of inefficient information exchange between different software packages and to enable true interoperability. In the BIM approach, the most up-to-date and accurate models of a building are stored in shared central databases during the design and construction of a project and at post-construction stages. GIS based city monitoring / city management applications require the fusion of information acquired from multiple resources: BIMs, City Models and Sensors. This paper focuses on providing a method for facilitating the GIS based fusion of information residing in digital building "Models" and information acquired from the city objects, i.e. "Things". Once this information fusion is accomplished, many fields ranging from Emergency Response, Urban Surveillance, and Urban Monitoring to Smart Buildings will have potential benefits.

  14. A Framework to Manage Information Models

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; King, T.; Crichton, D.; Walker, R.; Roberts, A.; Thieman, J.

    2008-05-01

    The Information Model is the foundation on which an Information System is built. It defines the entities to be processed, their attributes, and the relationships that add meaning. The development and subsequent management of the Information Model is the single most significant factor for the development of a successful information system. A framework of tools has been developed that supports the management of an information model with the rigor typically afforded to software development. This framework provides for evolutionary and collaborative development independent of system implementation choices. Once captured, the modeling information can be exported to common languages for the generation of documentation, application databases, and software code that supports both traditional and semantic web applications. This framework is being successfully used for several science information modeling projects including those for the Planetary Data System (PDS), the International Planetary Data Alliance (IPDA), the National Cancer Institute's Early Detection Research Network (EDRN), and several Consultative Committee for Space Data Systems (CCSDS) projects. The objective of the Space Physics Archive Search and Exchange (SPASE) program is to promote collaboration and coordination of archiving activity for the Space Plasma Physics community and ensure the compatibility of the architectures used for a global distributed system and the individual data centers. Over the past several years, the SPASE data model working group has made great progress in developing the SPASE Data Model and supporting artifacts including a data dictionary, XML Schema, and two ontologies. The authors have captured the SPASE Information Model in this framework. This allows the generation of documentation that presents the SPASE Information Model in object-oriented notation including UML class diagrams and class hierarchies. 
The modeling information can also be exported to semantic web languages such as OWL and RDF and written to XML Metadata Interchange (XMI) files for import into UML tools.

  15. The caCORE Software Development Kit: streamlining construction of interoperable biomedical information services.

    PubMed

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-06

    Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. 
caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.

  16. An Overview of the GIS Weasel

    USGS Publications Warehouse

    Viger, Roland J.

    2008-01-01

    This fact sheet provides a high-level description of the GIS Weasel, a software system designed to aid users in preparing spatial information as input to lumped and distributed parameter environmental simulation models (ESMs). The GIS Weasel provides geographic information system (GIS) tools to help create maps of geographic features relevant to the application of a user's ESM and to generate parameters from those maps. The operation of the GIS Weasel does not require a user to be a GIS expert, only that a user has an understanding of the spatial information requirements of the model. The GIS Weasel software system provides a GIS-based graphical user interface (GUI), C programming language executables, and general utility scripts. The software will run on any computing platform where ArcInfo Workstation (version 8.1 or later) and the GRID extension are accessible. The user controls the GIS Weasel by interacting with menus, maps, and tables.
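    The map-to-parameter step that such a tool automates can be illustrated with a toy example (invented data, not GIS Weasel code): deriving a lumped parameter, such as mean elevation per delineated zone, from gridded input:

```python
from collections import defaultdict

# Tiny invented rasters: an elevation grid and a zone map (e.g., hydrologic
# response units delineated from that map).
elevation = [
    [120, 130, 125],
    [140, 150, 145],
    [160, 170, 165],
]
zones = [
    [1, 1, 2],
    [1, 2, 2],
    [2, 2, 2],
]

# Accumulate cell values per zone, then take the mean: one lumped
# parameter value per modeling unit.
totals, counts = defaultdict(float), defaultdict(int)
for erow, zrow in zip(elevation, zones):
    for e, z in zip(erow, zrow):
        totals[z] += e
        counts[z] += 1
params = {z: totals[z] / counts[z] for z in totals}
print(params)  # {1: 130.0, 2: 152.5}
```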

  17. Rehabilitation of compensable workplace injuries: effective payment models for quality vocational rehabilitation outcomes in a changing social landscape.

    PubMed

    Matthews, Lynda R; Hanley, Francine; Lewis, Virginia; Howe, Caroline

    2015-01-01

    With social and economic costs of workplace injury on the increase, efficient payment models that deliver quality rehabilitation outcomes are of increasing interest. This paper provides a perspective on the issue informed by both refereed literature and published research material not available commercially (gray literature). A review of payment models, workers' compensation and compensable injury identified relevant peer-reviewed and gray literature that informed our discussion. Fee-for-service and performance-based payment models dominate the health and rehabilitation literature, each described as having benefits and challenges to achieving quality outcomes for consumers. There appears to be a movement toward performance-based payments in compensable workplace injury settings as they are perceived to promote time-efficient services and support innovation in rehabilitation practice. However, it appears that the challenges that arise for workplace-based rehabilitation providers and professionals when working under the various payment models, such as staff retention and quality of client-practitioner relationship, are absent from the literature and this could lead to flawed policy decisions. Robust evidence of the benefits and costs associated with different payment models - from the perspectives of clients/consumers, funders and service providers - is needed to inform best practice in rehabilitation of compensable workplace injuries. Available but limited evidence suggests that payment models providing financial incentives for stakeholder-agreed vocational rehabilitation outcomes tend to improve service effectiveness in workers' compensation settings, although there is little evidence of service quality or client satisfaction. Working in a system that identifies payments for stakeholder-agreed outcomes may be more satisfying for rehabilitation practitioners in workers' compensation settings by allowing more clinical autonomy and innovative practice. 
Researchers need to work closely with the compensation and rehabilitation sector as well as governments to establish robust evidence of the benefits and costs of payment models, from the perspectives of clients/consumers, funders, service providers and rehabilitation professionals.

  18. Fundamentals of Modeling, Data Assimilation, and High-performance Computing

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.

    2005-01-01

    This lecture will introduce the concepts of modeling, data assimilation and high- performance computing as it relates to the study of atmospheric composition. The lecture will work from basic definitions and will strive to provide a framework for thinking about development and application of models and data assimilation systems. It will not provide technical or algorithmic information, leaving that to textbooks, technical reports, and ultimately scientific journals. References to a number of textbooks and papers will be provided as a gateway to the literature.

  19. How social information can improve estimation accuracy in human groups.

    PubMed

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
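    The paper's observation that log-estimates are approximately Cauchy-distributed has a practical consequence for aggregation: heavy tails make the mean unreliable while the median stays robust. A simulated sketch (not the authors' code; all values invented):

```python
import math
import random
import statistics

# Simulate log10-estimates of a quantity whose true value is 1000, with
# Cauchy-distributed errors (heavy tails: occasional wild guesses).
random.seed(0)
true_value = 1000.0

def cauchy_sample(scale=0.3):
    # Standard Cauchy via the inverse-CDF method, then scaled.
    return scale * math.tan(math.pi * (random.random() - 0.5))

log_estimates = [math.log10(true_value) + cauchy_sample() for _ in range(10_000)]

# The median of the log-estimates is a robust collective estimate; the mean
# would be dominated by the tails.
group_estimate = 10 ** statistics.median(log_estimates)
print(group_estimate)
```

Under this assumption, aggregating in log space with a median is one defensible reading of "properly defined collective accuracy".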

  20. How social information can improve estimation accuracy in human groups

    PubMed Central

    Jayles, Bertrand; Kim, Hye-rin; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-01-01

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects’ sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. PMID:29118142

  1. Information and Innovation in Research Organizations.

    ERIC Educational Resources Information Center

    Baker, Norman R.; Freeland, James R.

    Empirical work in industrial research organizations has provided data to describe researcher behavior during innovation. Based on these data, the role of information during idea creation and submission is described. A model of a management information system, consistent with and supportive of researcher behavior, is structured to include technical…

  2. Nature's Notebook Provides Phenology Observations for NASA Juniper Phenology and Pollen Transport Project

    NASA Technical Reports Server (NTRS)

    Luval, J. C.; Crimmins, T. M.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Prasad, A.; Vukovic, A.; VandeWater, P. K.; Budge, A. M.; hide

    2014-01-01

    The USA National Phenology Network has been established to provide nationwide observations of vegetation phenology. However, as the Network is still in the early phases of establishment and growth, the density of observers is not yet adequate to sufficiently document phenology variability over large regions. Hence a combination of satellite data and ground observations can provide optimal information regarding Juniperus spp. pollen phenology. MODIS data were used to observe Juniperus spp. pollen phenology. The MODIS surface reflectance product provided information on Juniperus spp. cone formation and cone density. Ground-based observational records of pollen release timing and quantities were used as verification. Approximately 10,818 records of juniper phenology for male cone formation in Juniperus ashei, J. monosperma, J. scopulorum, and J. pinchotti were reported by Nature's Notebook observers in 2013. These observations provided valuable information for the analysis of satellite images for developing the pollen concentration masks for input into the PREAM (Pollen REgional Atmospheric Model) pollen transport model. The combination of satellite data and ground observations allowed us to improve our confidence in predicting pollen release and spread, thereby improving asthma and allergy alerts.

  3. An Evaluation of Understandability of Patient Journey Models in Mental Health

    PubMed Central

    2016-01-01

    Background: There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. Objectives: This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Method: Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. Results: The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. Conclusions: The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. 
Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers. PMID:27471006

  4. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    ERIC Educational Resources Information Center

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  5. Information Behavior and HIV Testing Intentions Among Young Men at Risk for HIV/AIDS

    PubMed Central

    Meadowbrooke, Chrysta C.; Veinot, Tiffany C.; Loveluck, Jimena; Hickok, Andrew; Bauermeister, José A.

    2014-01-01

    Health research shows that knowing about health risks may not translate into behavior change. However, such research typically operationalizes health information acquisition with knowledge tests. Information scientists who investigate socially embedded information behaviors could help improve understanding of potential associations between information behavior—as opposed to knowledge—and health behavior formation, thus providing new opportunities to investigate the effects of health information. We examine the associations between information behavior and HIV testing intentions among young men who have sex with men (YMSM), a group with high rates of unrecognized HIV infection. We used the theory of planned behavior (TPB) to predict intentions to seek HIV testing in an online sample of 163 YMSM. Multiple regression and recursive path analysis were used to test two models: (a) the basic TPB model and (b) an adapted model that added the direct effects of three information behaviors (information exposure, use of information to make HIV-testing decisions, prior experience obtaining an HIV test) plus self-rated HIV knowledge. As hypothesized, our adapted model improved predictions, explaining more than twice as much variance as the original TPB model. The results suggest that information behaviors may be more important predictors of health behavior intentions than previously acknowledged. PMID:25346934

  6. The Effect of Visual Information on the Manual Approach and Landing

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1982-01-01

    The effect of visual information, in combination with basic display information, on approach performance was investigated. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving-base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task, with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.

  7. EXpectation Propagation LOgistic REgRession (EXPLORER): Distributed Privacy-Preserving Online Model Learning

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Wu, Yuan; Cui, Lijuan; Cheng, Samuel; Ohno-Machado, Lucila

    2013-01-01

    We developed an EXpectation Propagation LOgistic REgRession (EXPLORER) model for distributed privacy-preserving online learning. The proposed framework provides a high level guarantee for protecting sensitive information, since the information exchanged between the server and the client is the encrypted posterior distribution of coefficients. Through experimental results, EXPLORER shows the same performance (e.g., discrimination, calibration, feature selection etc.) as the traditional frequentist Logistic Regression model, but provides more flexibility in model updating. That is, EXPLORER can be updated one point at a time rather than having to retrain the entire data set when new observations are recorded. The proposed EXPLORER supports asynchronized communication, which relieves the participants from coordinating with one another, and prevents service breakdown from the absence of participants or interrupted communications. PMID:23562651
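    The one-observation-at-a-time updating that distinguishes EXPLORER from batch retraining can be illustrated with a much simpler stand-in: plain online (stochastic-gradient) logistic regression. This sketch deliberately omits expectation propagation, encryption, and the client-server protocol, and uses synthetic data:

```python
import math
import random

# Online logistic regression: each new observation nudges the coefficients,
# so there is no need to retrain on the full data set. This is a simplified
# stand-in for incremental model updating, NOT the EXPLORER algorithm.
random.seed(1)
w, b, lr = [0.0, 0.0], 0.0, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def update(x, y):
    """Incorporate a single new (features, label) observation."""
    global b
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    err = y - p
    for i in range(len(w)):
        w[i] += lr * err * x[i]
    b += lr * err

# Stream synthetic observations one point at a time.
for _ in range(2000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    y = 1 if x[0] + 0.5 * x[1] > 0 else 0
    update(x, y)
print(w, b)
```

The asynchronous, point-by-point character of this loop is what lets a distributed system tolerate absent participants: each site simply applies its updates when it can.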

  8. Implications of Information Theory for Computational Modeling of Schizophrenia.

    PubMed

    Silverstein, Steven M; Wibral, Michael; Phillips, William A

    2017-10-01

    Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory-such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio-can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
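    Two of the foundational quantities the article builds on, Shannon entropy and its maximum for a uniform distribution, can be computed directly:

```python
import math

# Shannon entropy of a discrete distribution, in bits. Entropy is maximal
# (log2 of the number of outcomes) when the distribution is uniform and
# drops as the distribution becomes more peaked (more predictable).
def entropy_bits(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.7, 0.1, 0.1, 0.1]
print(entropy_bits(uniform))  # 2.0 bits, the maximum for 4 outcomes
print(entropy_bits(peaked))   # lower: less uncertainty per observation
```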

  9. Implications of Information Theory for Computational Modeling of Schizophrenia

    PubMed Central

    Wibral, Michael; Phillips, William A.

    2017-01-01

    Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory—such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio—can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development. PMID:29601053

  10. External factors in hospital information system (HIS) adoption model: a case on Malaysia.

    PubMed

    Lee, Heng Wei; Ramayah, Thurasamy; Zakaria, Nasriah

    2012-08-01

    Studies related to healthcare ICT integration in Malaysia are relatively few; thus, this paper provides a literature review of the integration of information and communication technologies (ICT) in the healthcare sector in Malaysia through the hospital information system (HIS). Our study relied on secondary data to investigate the factors related to ICT integration in healthcare through HIS. This paper therefore aimed to gather an in-depth understanding of issues related to HIS adoption, thereby contributing to fostering HIS adoption in Malaysia and other countries. It provides a direction for future research to study the correlation of factors affecting HIS adoption. Finally, a research model is proposed using current adoption theories and external factors from human, technology, and organization perspectives.

  11. The use of psychosocial assessment following the Haiti earthquake in the development of the three-year emotional psycho-medical mental health and psychosocial support (EP-MMHPS) plan.

    PubMed

    Jordan, Karin

    2010-01-01

    This article provides information about the 2010 Haiti earthquake. An assessment model used by a crisis counselor responding to the earthquake is presented, focusing on the importance of gathering pre-deployment and in-country assessments. Examples of the information gathered from children, adolescents, and adults through the in-country assessment model are presented. A brief overview of Haiti's three-year Emergency Psycho-Medical Mental Health and Psychosocial Support (EP-MMHPS) plan is provided. Finally, the article describes how the psychosocial manual, developed after assessing 200 Haitian survivors through in-country assessment, and the information gathered through pre-deployment assessment became part of the EP-MMHPS plan.

  12. Ontological modeling of electronic health information exchange.

    PubMed

    McMurray, J; Zhu, L; McKillop, I; Chen, H

    2015-08-01

    Investments of resources to purposively improve the movement of information between health system providers are currently made with imperfect information. No inventories of system-level electronic health information flows currently exist, nor do measures of inter-organizational electronic information exchange. Using Protégé 4, an open-source OWL (Web Ontology Language) editor and knowledge-based framework, we formalized a model that decomposes inter-organizational electronic health information flow into derivative concepts such as diversity, breadth, volume, structure, standardization and connectivity. The ontology was populated with data from a regional health system and the flows were measured. Individual instances' properties were inferred from their class associations as determined by their data and object property rules. It was also possible to visualize interoperability activity for regional analysis and planning purposes. A property called Impact was created from the total number of patients or clients that a health entity in the region served in a year, and the total number of health service providers or organizations with whom it exchanged information in support of clinical decision-making, diagnosis or treatment. Identifying providers with a high Impact but low Interoperability score could assist planners and policy-makers to optimize technology investments intended to electronically share patient information across the continuum of care. Finally, we demonstrated how linked ontologies were used to identify logical inconsistencies in self-reported data for the study. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Using waveform information in nonlinear data assimilation

    NASA Astrophysics Data System (ADS)

    Rey, Daniel; Eldridge, Michael; Morone, Uriel; Abarbanel, Henry D. I.; Parlitz, Ulrich; Schumann-Bischoff, Jan

    2014-12-01

    Information in measurements of a nonlinear dynamical system can be transferred to a quantitative model of the observed system to establish its fixed parameters and unobserved state variables. After this learning period is complete, one may predict the model response to new forces and, when successful, these predictions will match additional observations. This adjustment process encounters problems when the model is nonlinear and chaotic because dynamical instability impedes the transfer of information from the data to the model when the number of measurements at each observation time is insufficient. We discuss the use of information in the waveform of the data, realized through a time delayed collection of measurements, to provide additional stability and accuracy to this search procedure. Several examples are explored, including a few familiar nonlinear dynamical systems and small networks of Colpitts oscillators.
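The "time delayed collection of measurements" described above is commonly realized as a delay-coordinate embedding of a scalar observation. A minimal sketch follows; the signal and the embedding parameters are illustrative choices, not values from the paper:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Stack time-delayed copies of a scalar series into vectors
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# An illustrative scalar measurement stream
x = np.sin(np.linspace(0, 10 * np.pi, 500))
emb = delay_embed(x, dim=3, tau=5)
print(emb.shape)  # (490, 3)
```

Each row supplies several waveform samples at once, which is the extra information that helps stabilize the parameter/state search when single-time measurements are insufficient.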

  14. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    NASA Astrophysics Data System (ADS)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2018-06-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.
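The core idea of the ensemble analysis, estimating the forced response as the ensemble mean and internal variability as the across-member spread, can be sketched in a few lines. The synthetic 16-member "ensemble" below is invented for illustration and stands in for the RCM output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, trend = 16, 0.5

# Each member shares the same forced trend (e.g. a fractional change in
# precipitation) but adds its own realization of internal variability.
members = trend + rng.normal(0.0, 1.0, size=n_members)

signal = members.mean()      # estimate of the forced climate response
noise = members.std(ddof=1)  # estimate of internal variability
print(signal, noise)
```

A single member corresponds to one draw from this distribution, which is why an individual simulation provides only limited information about the forced response.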

  15. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    NASA Astrophysics Data System (ADS)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2017-09-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.

  16. Including Fossils in Phylogenetic Climate Reconstructions: A Deep Time Perspective on the Climatic Niche Evolution and Diversification of Spiny Lizards (Sceloporus).

    PubMed

    Lawing, A Michelle; Polly, P David; Hews, Diana K; Martins, Emília P

    2016-08-01

    Fossils and other paleontological information can improve phylogenetic comparative method estimates of phenotypic evolution and generate hypotheses related to species diversification. Here, we use fossil information to calibrate ancestral reconstructions of suitable climate for Sceloporus lizards in North America. Integrating data from the fossil record, general circulation models of paleoclimate during the Miocene, climate envelope modeling, and phylogenetic comparative methods provides a geographically and temporally explicit species distribution model of Sceloporus-suitable habitat through time. We provide evidence to support the historic biogeographic hypothesis of Sceloporus diversification in warm North American deserts and suggest a relatively recent Sceloporus invasion into Mexico around 6 Ma. We use a physiological model to map extinction risk. We suggest that the number of hours of restriction to a thermal refuge limited Sceloporus from inhabiting Mexico until the climate cooled enough to provide suitable habitat at approximately 6 Ma. If the future climate returns to the hotter climates of the past, Mexico, the place of highest modern Sceloporus richness, will no longer provide suitable habitats for Sceloporus to survive and reproduce.

  17. On the predictability of land surface fluxes from meteorological variables

    NASA Astrophysics Data System (ADS)

    Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.

    2018-01-01

    Previous research has shown that land surface models (LSMs) are performing poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations on LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
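The kind of empirical benchmark described above can be as simple as a linear regression of a flux on meteorological drivers alone. The sketch below uses invented synthetic data and variable names (shortwave radiation, air temperature, relative humidity) as assumptions; it is not the paper's benchmark ensemble:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Synthetic "meteorological forcing" variables
swdown = rng.uniform(0, 1000, n)   # shortwave radiation
tair = rng.uniform(260, 310, n)    # air temperature
rh = rng.uniform(0.2, 1.0, n)      # relative humidity

# Synthetic flux with a known dependence on the drivers plus noise
qle = 0.1 * swdown + 0.5 * (tair - 273.15) + 20 * rh + rng.normal(0, 5, n)

# Fit an intercept-plus-linear-terms empirical model by least squares
X = np.column_stack([np.ones(n), swdown, tair, rh])
coef, *_ = np.linalg.lstsq(X, qle, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((qle - pred) ** 2) / np.sum((qle - qle.mean()) ** 2)
print(r2)
```

Benchmarks of this form use no vegetation or soil properties, so their skill sets a lower bound on what a process-based LSM should achieve from the same forcing.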

  18. Modelling Mathematics Problem Solving Item Responses Using a Multidimensional IRT Model

    ERIC Educational Resources Information Center

    Wu, Margaret; Adams, Raymond

    2006-01-01

    This research examined students' responses to mathematics problem-solving tasks and applied a general multidimensional IRT model at the response category level. In doing so, cognitive processes were identified and modelled through item response modelling to extract more information than would be provided using conventional practices in scoring…

  19. Mobile-Based Dictionary of Information and Communication Technology

    NASA Astrophysics Data System (ADS)

    Liando, O. E. S.; Mewengkang, A.; Kaseger, D.; Sangkop, F. I.; Rantung, V. P.; Rorimpandey, G. C.

    2018-02-01

    This study aims to design and build a mobile-based dictionary of information and communication technology, providing access to a glossary of terms used in the field. The application built in this study uses the Android platform with an SQLite database model. The research follows the prototype development method, covering the stages of Communication, Quick Plan, Quick Design Modeling, Construction of Prototype, Deployment Delivery & Feedback, and Full System Transformation. The application is designed to help users learn and understand new terms and vocabulary encountered in the world of information and communication technology. The resulting mobile-based dictionary can serve as an alternative learning resource; in its simplest form, it meets the need for a comprehensive and accurate dictionary of information and communication technology terms.
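As an illustrative sketch only, an SQLite-backed term lookup of the kind this application describes might look like the following. The schema, table name, and sample terms are hypothetical, not taken from the application itself:

```python
import sqlite3

# Hypothetical glossary schema: one term, one definition
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE glossary (term TEXT PRIMARY KEY, definition TEXT)")
conn.executemany(
    "INSERT INTO glossary VALUES (?, ?)",
    [("LAN", "Local Area Network"), ("DNS", "Domain Name System")],
)

# Look up a term the way a dictionary app's search screen would
row = conn.execute(
    "SELECT definition FROM glossary WHERE term = ?", ("DNS",)
).fetchone()
print(row[0])  # Domain Name System
```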

  20. Documentation of the Retail Price Model

    EPA Pesticide Factsheets

    The Retail Price Model (RPM) provides a first‐order estimate of average retail electricity prices using information from the EPA Base Case v.5.13 Base Case or other scenarios for each of the 64 Integrated Planing Model (IPM) regions.

  1. Development of an EVA systems cost model. Volume 3: EVA systems cost model

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The EVA systems cost model presented is based on proposed EVA equipment for the space shuttle program. General information on EVA crewman requirements in a weightless environment and an EVA capabilities overview are provided.

  2. 2018 Regional, State, and Local Modelers' Workshop

    EPA Pesticide Factsheets

    The 2018 Regional, State, and Local (RSL) Modelers' Workshop is being held at the EPA's Region 1 Offices in Boston, MA from June 5-7, 2018. This page provides information on the agenda and registration for the RSL Modelers' Workshop.

  3. The simplest acquisition protocol is sometimes the best protocol: performing and learning a 1:2 bimanual coordination task.

    PubMed

    Panzer, Stefan; Kennedy, Deanna; Wang, Chaoyi; Shea, Charles H

    2018-02-01

    An experiment was conducted to determine if the performance and learning of a multi-frequency (1:2) coordination pattern between the limbs are enhanced when a model is provided prior to each acquisition trial. Research has indicated very effective performance of a wide variety of bimanual coordination tasks when Lissajous plots with goal templates are provided, but this research has also found that participants become dependent on this information and perform quite poorly when it is withdrawn. The present experiment was designed to test three forms of modeling (Lissajous with template, Lissajous without template, and limb model), but in each situation, the model was presented prior to practice and not available during the performance of the task. This was done to decrease dependency on the model and increase the development of an internal reference of correctness that could be applied on test trials. A control condition, in which a metronome was used to guide the movement, was also included. Following less than 7 min of practice, participants in the three modeling conditions performed the first test block very effectively; however, performance in the control condition was quite poor. Note that Test 1 was performed under the same conditions as used during acquisition. Test 2 was conducted with no augmented information provided prior to or during the performance of the task. Only participants in the limb model condition were able to maintain performance on Test 2. The findings suggest that a very simple intuitive display can provide the necessary information to form an effective internal representation of the coordination pattern, which can be used to guide performance when the augmented display is withdrawn.
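For readers unfamiliar with Lissajous goal templates, the template for a 1:2 multi-frequency pattern is just the curve traced when one limb's position is plotted against the other's. A minimal sketch, with illustrative parameters:

```python
import math

def lissajous(n=200, ratio=2, phase=0.0):
    """Goal template for a 1:ratio bimanual pattern: each point pairs the
    two limbs' normalized positions at one moment in the cycle."""
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        pts.append((math.sin(t), math.sin(ratio * t + phase)))
    return pts

pts = lissajous()
print(pts[0])  # (0.0, 0.0) at phase 0
```

Plotting these pairs yields the figure-eight-like template against which a participant's produced trajectory can be compared.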

  4. A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters

    EPA Science Inventory

    Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...

  5. Competition for resources can explain patterns of social and individual learning in nature.

    PubMed

    Smolla, Marco; Gilman, R Tucker; Galla, Tobias; Shultz, Susanne

    2015-09-22

    In nature, animals often ignore socially available information despite the multiple theoretical benefits of social learning over individual trial-and-error learning. Using information filtered by others is quicker, more efficient and less risky than randomly sampling the environment. To explain the mix of social and individual learning used by animals in nature, most models penalize the quality of socially derived information as either out of date, of poor fidelity or costly to acquire. Competition for limited resources, a fundamental evolutionary force, provides a compelling, yet hitherto overlooked, explanation for the evolution of mixed-learning strategies. We present a novel model of social learning that incorporates competition and demonstrates that (i) social learning is favoured when competition is weak, but (ii) if competition is strong social learning is favoured only when resource quality is highly variable and there is low environmental turnover. The frequency of social learning in our model always evolves until it reduces the mean foraging success of the population. The results of our model are consistent with empirical studies showing that individuals rely less on social information where resources vary little in quality and where there is high within-patch competition. Our model provides a framework for understanding the evolution of social learning, a prerequisite for human cumulative culture. © 2015 The Author(s).

  6. Competition for resources can explain patterns of social and individual learning in nature

    PubMed Central

    Smolla, Marco; Gilman, R. Tucker; Galla, Tobias; Shultz, Susanne

    2015-01-01

    In nature, animals often ignore socially available information despite the multiple theoretical benefits of social learning over individual trial-and-error learning. Using information filtered by others is quicker, more efficient and less risky than randomly sampling the environment. To explain the mix of social and individual learning used by animals in nature, most models penalize the quality of socially derived information as either out of date, of poor fidelity or costly to acquire. Competition for limited resources, a fundamental evolutionary force, provides a compelling, yet hitherto overlooked, explanation for the evolution of mixed-learning strategies. We present a novel model of social learning that incorporates competition and demonstrates that (i) social learning is favoured when competition is weak, but (ii) if competition is strong social learning is favoured only when resource quality is highly variable and there is low environmental turnover. The frequency of social learning in our model always evolves until it reduces the mean foraging success of the population. The results of our model are consistent with empirical studies showing that individuals rely less on social information where resources vary little in quality and where there is high within-patch competition. Our model provides a framework for understanding the evolution of social learning, a prerequisite for human cumulative culture. PMID:26354936

  7. Ground-water models for water resources planning

    USGS Publications Warehouse

    Moore, John E.

    1980-01-01

    In the past decade hydrologists have emphasized the development of computer-based mathematical models to aid in the understanding of flow, the transport of solutes, transport of heat, and deformation in the groundwater system. These models have been used to provide information and predictions for water managers. Too frequently, groundwater was neglected in water-resource planning because managers believed that it could not be adequately evaluated in terms of availability, quality, and effect of development on surface water supplies. Now, however, with newly developed digital groundwater models, effects of development can be predicted. Such models have been used to predict hydrologic and quality changes under different stresses. These models have grown in complexity over the last 10 years from simple one-layer flow models to three-dimensional simulations of groundwater flow which may include solute transport, heat transport, effects of land subsidence, and encroachment of salt water. This paper illustrates, through case histories, how predictive groundwater models have provided the information needed for the sound planning and management of water resources in the United States. (USGS)

  8. Diagnostic causal reasoning with verbal information.

    PubMed

    Meder, Björn; Mayrhofer, Ralf

    2017-08-01

    In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people's inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., "X occasionally causes A"). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes' prior probabilities and the effects' likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework. Copyright © 2017 Elsevier Inc. All rights reserved.
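The sequential diagnostic inference described above can be sketched as repeated Bayesian conditioning on each new effect. The causes, effects, and the numeric equivalent of a verbal term like "occasionally" below are invented for illustration, not values from the study:

```python
def sequential_diagnosis(prior, likelihoods, evidence):
    """Posterior over causes after observing a sequence of effects.

    prior: dict cause -> P(cause)
    likelihoods: dict cause -> dict effect -> P(effect | cause)
    evidence: list of observed effects, in order
    """
    post = dict(prior)
    for effect in evidence:
        post = {c: post[c] * likelihoods[c][effect] for c in post}
        z = sum(post.values())
        post = {c: p / z for c, p in post.items()}
    return post

# Two candidate causes; "X occasionally causes A" rendered as a numeric
# equivalent (here 0.2), in the spirit of the elicited translations.
prior = {"X": 0.5, "Y": 0.5}
lik = {"X": {"A": 0.2, "B": 0.8}, "Y": {"A": 0.7, "B": 0.3}}
print(sequential_diagnosis(prior, lik, ["A", "A"]))
```

This unweighted updating treats early and late evidence symmetrically; the temporal-weighting model in the paper relaxes exactly that symmetry to capture primacy and recency effects.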

  9. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    NASA Astrophysics Data System (ADS)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and combine them with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists in two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists in selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists in assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.
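The data-selection step, choosing previously studied sites similar to the target site before pooling their data into a prior, can be sketched as below. The site records, features, and parameter values are hypothetical, and Python is used for compactness even though the study's implementation is an R package:

```python
import math

def select_similar_sites(target_features, sites, k=3):
    """Pick the k sites whose observable features lie closest
    (Euclidean distance) to the target site's features."""
    return sorted(sites, key=lambda s: math.dist(s["features"], target_features))[:k]

# Hypothetical database records: features = (porosity, depth_km);
# logK = log10 hydraulic conductivity at that site
sites = [
    {"name": "A", "features": (0.30, 0.10), "logK": -4.0},
    {"name": "B", "features": (0.31, 0.12), "logK": -4.2},
    {"name": "C", "features": (0.10, 1.00), "logK": -7.0},
    {"name": "D", "features": (0.29, 0.11), "logK": -4.1},
]
similar = select_similar_sites((0.30, 0.10), sites, k=3)
prior_mean = sum(s["logK"] for s in similar) / len(similar)
print([s["name"] for s in similar], prior_mean)
```

In the full method, the selected sites' data are then assimilated through a Bayesian hierarchical model rather than simply averaged, so that inter-site variability widens the resulting prior appropriately.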

  10. Model architecture of intelligent data mining oriented urban transportation information

    NASA Astrophysics Data System (ADS)

    Yang, Bogang; Tao, Yingchun; Sui, Jianbo; Zhang, Feizhou

    2007-06-01

    Aiming at solving practical problems in urban traffic, the paper presents a hierarchical model architecture for intelligent data mining. With artificial intelligence technologies incorporated into the framework, the data mining process improves and becomes better suited to changing real-time road conditions. The architecture also provides efficient technical support for the distribution, transmission, and display of urban transport information.

  11. An Assessment of Feedback Procedures and Information Provided to Instructors within Computer Managed Learning Environments--Implications for Instruction and Software Redesign.

    ERIC Educational Resources Information Center

    Kotesky, Arturo A.

    Feedback procedures and information provided to instructors within computer managed learning environments were assessed to determine current usefulness and meaningfulness to users, and to present the design of a different instructor feedback instrument. Kaufman's system model was applied to accomplish the needs assessment phase of the study; and…

  12. Landfill gas control at military installations. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafer, R.A.; Renta-Babb, A.; Bandy, J.T.

    1984-01-01

    This report provides information useful to Army personnel responsible for recognizing and solving potential problems from gas generated by landfills. Information is provided on recognizing and gauging the magnitude of landfill gas problems; selecting appropriate gas control strategies, procedures, and equipment; use of computer modeling to predict gas production and migration and the success of gas control devices; and safety considerations.

  13. Content Analysis of "School Psychology International", 1990-2011: An Analysis of Trends and Compatibility with the NASP Practice Model

    ERIC Educational Resources Information Center

    Little, Steven G.; Akin-Little, Angeleque; Lloyd, Keryn

    2011-01-01

    Formal analysis of research publications serves as one indicator of the current status of a profession or a journal. Content analyses provide both practitioners and academicians with information on the status of research in the profession. These types of analyses can also provide information on the concordance between published research and what…

  14. Where Do We Go from Here? A National Model for Meeting the Informational Needs of the Unemployed.

    ERIC Educational Resources Information Center

    Kopecky, Robert J.

    The "Where Do We Go from Here?" conferences provide information to meet the long-term needs of unemployed workers (career change and retraining) and the short-term needs of coping with the psychological problems of being out-of-work. The conference presentation includes 70 to 100 volunteer professionals, providing optimistic and motivational…

  15. Addressing Early Life Sensitivity Using Physiologically Based Pharmacokinetic Modeling and In Vitro to In Vivo Extrapolation

    PubMed Central

    Yoon, Miyoung; Clewell, Harvey J.

    2016-01-01

    Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico-based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical in different ages in order to predict the target tissue exposure at the age of concern in humans. We present our ongoing studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps using the metabolism data collected either from age-specific liver donors or expressed enzymes in conjunction with enzyme ontogeny information to provide age-appropriate metabolism parameters in the PBPK model in the rat and human, respectively. The approach we present here is readily applicable not just to other pyrethroids, but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico-based evaluation strategy in conjunction with relevant exposure information in humans is of great importance in risk assessment for potentially vulnerable populations like early ages where the necessary information for decision making is limited. PMID:26977255
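A central arithmetic step in IVIVE is scaling an in vitro intrinsic clearance, measured per milligram of microsomal protein, up to the whole body. The sketch below uses commonly cited adult-human default scaling factors (about 32 mg microsomal protein per g liver and 26 g liver per kg body weight) as assumptions, not the authors' values; age-specific parameters would replace these defaults in a children's model:

```python
def scale_clearance(clint_ul_min_mg, mg_protein_per_g_liver=32.0,
                    liver_g_per_kg_bw=26.0):
    """Scale in vitro intrinsic clearance (uL/min/mg microsomal protein)
    to whole-body intrinsic clearance (L/h/kg body weight)."""
    ul_min_per_kg = clint_ul_min_mg * mg_protein_per_g_liver * liver_g_per_kg_bw
    return ul_min_per_kg * 60 / 1e6  # uL/min -> L/h

print(round(scale_clearance(10.0), 3))  # 0.499 L/h/kg
```

The scaled clearance then enters the PBPK model's liver compartment, which is how age-specific in vitro metabolism data propagate into age-specific tissue-exposure predictions.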

  16. Addressing Early Life Sensitivity Using Physiologically Based Pharmacokinetic Modeling and In Vitro to In Vivo Extrapolation.

    PubMed

    Yoon, Miyoung; Clewell, Harvey J

    2016-01-01

    Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico-based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical in different ages in order to predict the target tissue exposure at the age of concern in humans. We present our ongoing studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps using the metabolism data collected either from age-specific liver donors or expressed enzymes in conjunction with enzyme ontogeny information to provide age-appropriate metabolism parameters in the PBPK model in the rat and human, respectively. The approach we present here is readily applicable not just to other pyrethroids, but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico-based evaluation strategy in conjunction with relevant exposure information in humans is of great importance in risk assessment for potentially vulnerable populations like early ages where the necessary information for decision making is limited.

  17. Pathway index models for construction of patient-specific risk profiles.

    PubMed

    Eng, Kevin H; Wang, Sijian; Bradley, William H; Rader, Janet S; Kendziorski, Christina

    2013-04-30

    Statistical methods for variable selection, prediction, and classification have proven extremely useful in moving personalized genomics medicine forward, in particular, leading to a number of genomic-based assays now in clinical use for predicting cancer recurrence. Although invaluable in individual cases, the information provided by these assays is limited. Most often, a patient is classified into one of very few groups (e.g., recur or not), limiting the potential for truly personalized treatment. Furthermore, although these assays provide information on which individuals are at most risk (e.g., those for which recurrence is predicted), they provide no information on the aberrant biological pathways that give rise to the increased risk. We have developed an approach to address these limitations. The approach models a time-to-event outcome as a function of known biological pathways, identifies important genomic aberrations, and provides pathway-based patient-specific assessments of risk. As we demonstrate in a study of ovarian cancer from The Cancer Genome Atlas project, the patient-specific risk profiles are powerful and efficient characterizations useful in addressing a number of questions related to identifying informative patient subtypes and predicting survival. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Online Information Sharing About Risks: The Case of Organic Food.

    PubMed

    Hilverda, Femke; Kuttschreuter, Margôt

    2018-03-23

    Individuals have to make sense of an abundance of information to decide whether or not to purchase certain food products. One of the means to sense-making is information sharing. This article reports on a quantitative study examining online information sharing behavior regarding the risks of organic food products. An online survey among 535 respondents was conducted in the Netherlands to examine the determinants of information sharing behavior, and their relationships. Structural equation modeling was applied to test both the measurement model and the structural model. Results showed that the intention to share information online about the risks of organic food was low. Conversations and email were the preferred channels to share information; of the social media Facebook stood out. The developed model was found to provide an adequate description of the data. It explained 41% of the variance in information sharing. Injunctive norms and outcome expectancies were most important in predicting online information sharing, followed by information-related determinants. Risk-perception-related determinants showed a significant, but weak, positive relationship with online information sharing. Implications for authorities communicating on risks associated with food are addressed. © 2018 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.

  19. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
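The two estimators contrasted in this abstract can be illustrated generically. The Python sketch below numerically computes the inverse observed information and the sandwich estimator A⁻¹BA⁻¹ for a deliberately misspecified toy model (a normal mean with the variance wrongly fixed at 1). It is a generic illustration under stated assumptions, not the CDM-specific procedures of the paper.

```python
import numpy as np

def information_covariances(loglik_i, theta, eps=1e-5):
    """Estimate two covariance matrices for an MLE `theta` from a
    per-observation log-likelihood function `loglik_i(theta) -> array`:
    the inverse observed information, and the sandwich A^-1 B A^-1."""
    p = len(theta)
    n = len(loglik_i(theta))
    # Per-observation scores via central differences.
    scores = np.zeros((n, p))
    for j in range(p):
        e = np.zeros(p)
        e[j] = eps
        scores[:, j] = (loglik_i(theta + e) - loglik_i(theta - e)) / (2 * eps)
    # Observed information A = negative Hessian of the total log-likelihood.
    A = np.zeros((p, p))
    for j in range(p):
        for k in range(p):
            ej = np.zeros(p); ej[j] = eps
            ek = np.zeros(p); ek[k] = eps
            d2 = (loglik_i(theta + ej + ek).sum()
                  - loglik_i(theta + ej - ek).sum()
                  - loglik_i(theta - ej + ek).sum()
                  + loglik_i(theta - ej - ek).sum()) / (4 * eps ** 2)
            A[j, k] = -d2
    B = scores.T @ scores                 # empirical cross-product matrix
    cov_obs = np.linalg.inv(A)            # inverse observed information
    cov_sand = cov_obs @ B @ cov_obs      # sandwich-type estimator
    return cov_obs, cov_sand

# Toy misspecified model: normal mean with variance fixed at 1,
# fitted to data whose true standard deviation is 2.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=2000)
theta_hat = np.array([x.mean()])                 # MLE of the mean
loglik = lambda th: -0.5 * (x - th[0]) ** 2      # per-obs log-lik (up to const)
cov_obs, cov_sand = information_covariances(loglik, theta_hat)
# Under misspecification the sandwich variance (~ s^2/n) exceeds the
# inverse-information variance (1/n), mirroring the robustness result above.
```

This reproduces, in miniature, the abstract's finding: the two estimators agree for a correctly specified model but only the sandwich estimator remains trustworthy under misspecification.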

  20. Future-year ozone prediction for the United States using updated models and inputs.

    PubMed

    Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun

    2017-08-01

The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) modeling emission tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show that future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations, and were important for many of the eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies. The relationship between emission reductions and changes in ozone can be studied using photochemical grid models, which are updated with new information as it becomes available. This study updated the previous Collet et al. studies by using the most current models and inventory available at the time to conduct ozone source attribution and sensitivity studies. The source attribution results provide useful information on the important source categories and some initial guidance on future emission reduction strategies.

  1. Modeling and visualizing borehole information on virtual globes using KML

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing

    2014-01-01

Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail-based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualizing, integrating and disseminating borehole information on the Internet. The method we have developed has potential use in the societal service of geological information.
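As a rough illustration of the simplest conversion Borehole2KML performs (drilling locations to point placemarks), the Python sketch below emits a minimal KML document. The function name and record fields are hypothetical, not the tool's actual schema.

```python
def boreholes_to_kml(boreholes):
    """Emit a KML document with one point placemark per borehole record."""
    placemarks = "".join(
        "  <Placemark>\n"
        f"    <name>{b['id']}</name>\n"
        f"    <description>depth: {b['depth_m']} m</description>\n"
        f"    <Point><coordinates>{b['lon']},{b['lat']},0</coordinates></Point>\n"
        "  </Placemark>\n"
        for b in boreholes)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
            + placemarks + '</Document></kml>')

# One hypothetical borehole record (Shanghai-area coordinates).
kml = boreholes_to_kml([{"id": "BH-01", "lon": 121.47, "lat": 31.23,
                         "depth_m": 55.0}])
```

The resulting document can be opened directly in a virtual globe application such as Google Earth; the LOD-based tube and contact models described in the abstract would require additional KML geometry beyond this sketch.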

  2. Scaling the Information Processing Demands of Occupations

    ERIC Educational Resources Information Center

    Haase, Richard F.; Jome, LaRae M.; Ferreira, Joaquim Armando; Santos, Eduardo J. R.; Connacher, Christopher C.; Sendrowitz, Kerrin

    2011-01-01

    The purpose of this study was to provide additional validity evidence for a model of person-environment fit based on polychronicity, stimulus load, and information processing capacities. In this line of research the confluence of polychronicity and information processing (e.g., the ability of individuals to process stimuli from the environment…

  3. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    NASA Technical Reports Server (NTRS)

    Trase, Kathryn; Fink, Eric

    2014-01-01

Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to use easily. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST) that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from an MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is consistent with both the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering data to see specialized views, improves the value of the system as a whole, as data becomes information.

  4. Providing effective trauma care: the potential for service provider views to enhance the quality of care (qualitative study nested within a multicentre longitudinal quantitative study)

    PubMed Central

    Beckett, Kate; Earthy, Sarah; Sleney, Jude; Barnes, Jo; Kellezi, Blerina; Barker, Marcus; Clarkson, Julie; Coffey, Frank; Elder, Georgina; Kendrick, Denise

    2014-01-01

    Objective To explore views of service providers caring for injured people on: the extent to which services meet patients’ needs and their perspectives on factors contributing to any identified gaps in service provision. Design Qualitative study nested within a quantitative multicentre longitudinal study assessing longer term impact of unintentional injuries in working age adults. Sampling frame for service providers was based on patient-reported service use in the quantitative study, patient interviews and advice of previously injured lay research advisers. Service providers’ views were elicited through semistructured interviews. Data were analysed using thematic analysis. Setting Participants were recruited from a range of settings and services in acute hospital trusts in four study centres (Bristol, Leicester, Nottingham and Surrey) and surrounding areas. Participants 40 service providers from a range of disciplines. Results Service providers described two distinct models of trauma care: an ‘ideal’ model, informed by professional knowledge of the impact of injury and awareness of best models of care, and a ‘real’ model based on the realities of National Health Service (NHS) practice. Participants’ ‘ideal’ model was consistent with standards of high-quality effective trauma care and while there were examples of services meeting the ideal model, ‘real’ care could also be fragmented and inequitable with major gaps in provision. Service provider accounts provide evidence of comprehensive understanding of patients’ needs, awareness of best practice, compassion and research but reveal significant organisational and resource barriers limiting implementation of knowledge in practice. Conclusions Service providers envisage an ‘ideal’ model of trauma care which is timely, equitable, effective and holistic, but this can differ from the care currently provided. 
Their experiences provide many suggestions for service improvements to bridge the gap between ‘real’ and ‘ideal’ care. Using service provider views to inform service design and delivery could enhance the quality, patient experience and outcomes of care. PMID:25005598

  5. A comparison of administrative and physiologic predictive models in determining risk adjusted mortality rates in critically ill patients.

    PubMed

    Enfield, Kyle B; Schafer, Katherine; Zlupko, Mike; Herasevich, Vitaly; Novicoff, Wendy M; Gajic, Ognjen; Hoke, Tracey R; Truwit, Jonathon D

    2012-01-01

Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients. We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared through the Pearson product-moment coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission. We included 556 patients from two academic medical centers in this analysis. The administrative and physiologic models' predicted mortalities for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value<0.001). The r(2) for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing a subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistical difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as those patients who died despite a predicted mortality of less than 10%, was a rare event by either model. In conclusion, while it has been shown that administrative models provide estimates of mortality similar to those of physiologic models in non-critically ill patients with pneumonia, our results suggest this finding cannot be applied globally to patients admitted to intensive care units. As patients and providers increasingly use publicly reported information in making health care decisions and referrals, it is critical that the provided information be understood. Our results suggest that severity of illness may influence the mortality index in administrative models. We suggest that, when interpreting "report cards" or metrics, health care providers determine how the risk adjustment was made and how it compares to other risk-adjustment models.
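The Bland-Altman comparison used in this study can be sketched in a few lines of Python. The predicted mortalities below are hypothetical, purely to show how the bias and 95% limits of agreement are computed.

```python
import numpy as np

def bland_altman(pred_a, pred_b):
    """Bland-Altman agreement statistics for two sets of predictions:
    mean bias and the 95% limits of agreement (bias +/- 1.96 SD)."""
    a, b = np.asarray(pred_a, float), np.asarray(pred_b, float)
    diff = a - b                 # y-axis of the Bland-Altman plot
    mean = (a + b) / 2.0         # x-axis of the Bland-Altman plot
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return {"means": mean, "diffs": diff, "bias": bias,
            "loa": (bias - 1.96 * sd, bias + 1.96 * sd)}

# Hypothetical predicted mortalities (fractions) from an
# administrative-style and a physiologic-style model.
admin = [0.05, 0.10, 0.20, 0.40, 0.60]
physio = [0.06, 0.12, 0.28, 0.55, 0.80]
stats = bland_altman(admin, physio)
# The gap widens with predicted mortality, mirroring the divergence
# at high severity reported in the abstract.
```

Points falling outside the limits of agreement, or a difference that trends with the mean (as in this toy data), flag systematic disagreement between the two models.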

  6. Regional crop yield forecasting: a probabilistic approach

    NASA Astrophysics Data System (ADS)

    de Wit, A.; van Diepen, K.; Boogaard, H.

    2009-04-01

    Information on the outlook on yield and production of crops over large regions is essential for government services dealing with import and export of food crops, for agencies with a role in food relief, for international organizations with a mandate in monitoring the world food production and trade, and for commodity traders. Process-based mechanistic crop models are an important tool for providing such information, because they can integrate the effect of crop management, weather and soil on crop growth. When properly integrated in a yield forecasting system, the aggregated model output can be used to predict crop yield and production at regional, national and continental scales. Nevertheless, given the scales at which these models operate, the results are subject to large uncertainties due to poorly known weather conditions and crop management. Current yield forecasting systems are generally deterministic in nature and provide no information about the uncertainty bounds on their output. To improve on this situation we present an ensemble-based approach where uncertainty bounds can be derived from the dispersion of results in the ensemble. The probabilistic information provided by this ensemble-based system can be used to quantify uncertainties (risk) on regional crop yield forecasts and can therefore be an important support to quantitative risk analysis in a decision making process.
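Computationally, the ensemble approach described above reduces to summarizing the dispersion of member forecasts. A minimal Python sketch with hypothetical yield values (the distribution and thresholds are invented for illustration):

```python
import numpy as np

# Hypothetical ensemble of regional yield forecasts (t/ha), one value
# per ensemble member, e.g. from perturbed weather and management inputs.
rng = np.random.default_rng(42)
ensemble = rng.normal(7.5, 0.6, size=100)

median = np.percentile(ensemble, 50)          # central forecast
lo, hi = np.percentile(ensemble, [5, 95])     # 90% uncertainty band
p_shortfall = float(np.mean(ensemble < 7.0))  # risk of yield below 7 t/ha
```

The interval (lo, hi) is the uncertainty bound a deterministic system cannot provide, and `p_shortfall` is the kind of exceedance probability that feeds directly into quantitative risk analysis.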

  7. A proposal to conduct a Caribbean plate project involving the application of space technology to the study of Caribbean geology

    NASA Technical Reports Server (NTRS)

    Wadge, G. (Editor)

    1981-01-01

The Caribbean plate project is designed to improve current understanding of geological resources and geological hazards within the Caribbean region. Models of mineral occurrence and genesis (including energy resources) on a regional scale, which contribute to nonrenewable resource investigations, and models of lithospheric stress and strain on a regional scale, which contribute to forecasting geological hazards such as earthquakes and major volcanic eruptions, are developed. Geological information is synthesized, and research tools provided by space technology for the study of the Earth's crust are used. The project was organized in a thematic fashion, to focus on specific geological aspects of the Caribbean plate which are considered to be key factors in developing the types of models described. The project adopts a synoptic perspective in seeking to characterize the three-dimensional structure, composition, state of stress, and evolution of the entire Caribbean plate. Geological information derived from analysis of space-acquired data is combined with information provided by conventional methods to obtain insight into the structure, composition, and evolution of the Earth's crust. In addition, very long baseline interferometry and laser ranging techniques, which are also based upon the use of space technology, obtain information concerning crustal motion that, in turn, provides insight into the distribution and localization of crustal stress.

  8. Simulated environmental transport distances of Lepeophtheirus salmonis in Loch Linnhe, Scotland, for informing aquaculture area management structures.

    PubMed

    Salama, N K G; Murray, A G; Rabe, B

    2016-04-01

In the majority of salmon farming countries, production occurs in zones where practices are coordinated to manage disease agents such as Lepeophtheirus salmonis. To inform the structure of zones in specific systems, models have been developed accounting for parasite biology and system hydrodynamics. These models provide farm relationships for individual systems, and as such, it may be beneficial to derive more generalized principles for informing zone structures. Here, we use six different forcing scenarios to provide simulations from a previously described model of the Loch Linnhe system, Scotland, to assess the maximum dispersal distance of lice particles released from 12 sites and transported over 19 days. Results indicate that the median distance travelled is 6.1 km from the release site, with <2.5% transported beyond 15 km (occurring for particles originating from half of the release sites) and an absolute simulated distance of 36 km observed. This suggests that the disease management areas developed for infectious salmon anaemia control may also have properties appropriate for salmon lice management in Scottish coastal waters. Additionally, general numerical descriptors of the simulated reduction in relative lice abundance with increased distance from the release location are proposed. © 2015 Crown copyright. © 2015 John Wiley & Sons Ltd.

  9. Videomicroscopic extraction of specific information on cell proliferation and migration in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debeir, Olivier; Megalizzi, Veronique; Warzee, Nadine

    2008-10-01

In vitro cell imaging is a useful exploratory tool for cell behavior monitoring with a wide range of applications in cell biology and pharmacology. Combined with appropriate image analysis techniques, this approach has been shown to provide useful information on the detection and dynamic analysis of cell events. In this context, numerous efforts have been focused on cell migration analysis. In contrast, the cell division process has been the subject of fewer investigations. The present work focuses on this latter aspect and shows that, in complement to cell migration data, interesting information related to cell division can be extracted from phase-contrast time-lapse image series, in particular cell division duration, which is not provided by standard cell assays using endpoint analyses. We illustrate our approach by analyzing the effects induced by two sigma-1 receptor ligands (haloperidol and 4-IBP) on the behavior of two glioma cell lines using two in vitro cell models, i.e., the low-density individual cell model and the high-density scratch wound model. This illustration also shows that the data provided by our approach are suggestive as to the mechanism of action of compounds, and are thus capable of informing the appropriate selection of further time-consuming and more expensive biological evaluations required to elucidate a mechanism.

  10. Text mining and its potential applications in systems biology.

    PubMed

    Ananiadou, Sophia; Kell, Douglas B; Tsujii, Jun-ichi

    2006-12-01

    With biomedical literature increasing at a rate of several thousand papers per week, it is impossible to keep abreast of all developments; therefore, automated means to manage the information overload are required. Text mining techniques, which involve the processes of information retrieval, information extraction and data mining, provide a means of solving this. By adding meaning to text, these techniques produce a more structured analysis of textual knowledge than simple word searches, and can provide powerful tools for the production and analysis of systems biology models.

  11. Mental workload prediction based on attentional resource allocation and information processing.

    PubMed

    Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin

    2015-01-01

    Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.

  12. Expanding resource theory and feminist-informed theory to explain intimate partner violence perpetration by court-ordered men.

    PubMed

    Basile, Kathleen C; Hall, Jeffrey E; Walters, Mikel L

    2013-07-01

    This study tested resource and feminist-informed theories to explain physical, sexual, psychological, and stalking intimate partner violence (IPV) perpetrated by court-mandated men. Data were obtained from 340 men arrested for physical assault of a partner before their court-ordered treatment. Using path analysis, findings provided partial support for each model. Ineffective arguing and substance-use problems were moderators of resources and perpetration. Dominance mediated early exposures and perpetration in the feminist-informed model. In both models, predictors of stalking were different than those for other types of perpetration. Future studies should replicate this research and determine the utility of combining models.

  13. geophylobuilder 1.0: an arcgis extension for creating 'geophylogenies'.

    PubMed

    Kidd, David M; Liu, Xianhua

    2008-01-01

Evolution is inherently a spatiotemporal process; however, despite this, phylogenetic and geographical data and models remain largely isolated from one another. Geographical information systems provide a ready-made spatial modelling, analysis and dissemination environment within which phylogenetic models can be explicitly linked with their associated spatial data and subsequently integrated with other georeferenced data sets describing the biotic and abiotic environment. geophylobuilder 1.0 is an extension for the arcgis geographical information system that builds a 'geophylogenetic' data model from a phylogenetic tree and associated geographical data. Geophylogenetic database objects can subsequently be queried, spatially analysed and visualized in both 2D and 3D within a geographical information system. © 2007 The Authors.

  14. Deriving user-informed climate information from climate model ensemble results

    NASA Astrophysics Data System (ADS)

    Huebener, Heike; Hoffmann, Peter; Keuler, Klaus; Pfeifer, Susanne; Ramthun, Hans; Spekat, Arne; Steger, Christian; Warrach-Sagi, Kirsten

    2017-07-01

    Communication between providers and users of climate model simulation results still needs to be improved. In the German regional climate modeling project ReKliEs-De a midterm user workshop was conducted to allow the intended users of the project results to assess the preliminary results and to streamline the final project results to their needs. The user feedback highlighted, in particular, the still considerable gap between climate research output and user-tailored input for climate impact research. Two major requests from the user community addressed the selection of sub-ensembles and some condensed, easy to understand information on the strengths and weaknesses of the climate models involved in the project.

  15. Using the Weighted Keyword Model to Improve Information Retrieval for Answering Biomedical Questions

    PubMed Central

    Yu, Hong; Cao, Yong-gang

    2009-01-01

Physicians ask many complex questions during the patient encounter. Information retrieval systems that can provide immediate and relevant answers to these questions can be invaluable aids to the practice of evidence-based medicine. In this study, we first automatically identify topic keywords from ad hoc clinical questions with a Conditional Random Field model that is trained over thousands of manually annotated clinical questions. We then report on a linear model that assigns query weights based on their automatically identified semantic roles: topic keywords, domain-specific terms, and their synonyms. Our evaluation shows that this weighted keyword model improves information retrieval from the Text Retrieval Conference Genomics track data. PMID:21347188

  16. Using the weighted keyword model to improve information retrieval for answering biomedical questions.

    PubMed

    Yu, Hong; Cao, Yong-Gang

    2009-03-01

Physicians ask many complex questions during the patient encounter. Information retrieval systems that can provide immediate and relevant answers to these questions can be invaluable aids to the practice of evidence-based medicine. In this study, we first automatically identify topic keywords from ad hoc clinical questions with a Conditional Random Field model that is trained over thousands of manually annotated clinical questions. We then report on a linear model that assigns query weights based on their automatically identified semantic roles: topic keywords, domain-specific terms, and their synonyms. Our evaluation shows that this weighted keyword model improves information retrieval from the Text Retrieval Conference Genomics track data.
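The weighted-keyword idea described in this abstract can be caricatured in a few lines: query terms receive weights by semantic role, and documents are ranked by the weighted sum of matched terms. The role weights, example terms, and documents below are invented for illustration and are not the paper's trained values.

```python
# Illustrative role weights; the paper derives weights via a linear
# model over annotated clinical questions, these numbers are made up.
ROLE_WEIGHTS = {"topic": 3.0, "domain": 2.0, "synonym": 1.0}

def score(doc_tokens, weighted_query):
    """Rank score: weighted sum over query terms present in the doc."""
    toks = set(doc_tokens)
    return sum(ROLE_WEIGHTS[role] for term, role in weighted_query
               if term in toks)

# Query terms tagged with hypothetical semantic roles.
query = [("BRCA1", "topic"), ("mutation", "domain"), ("variant", "synonym")]
docs = {
    "d1": ["BRCA1", "mutation", "breast", "cancer"],
    "d2": ["TP53", "variant", "assay"],
}
ranked = sorted(docs, key=lambda d: score(docs[d], query), reverse=True)
```

Documents matching the topic keyword outrank those matching only a synonym, which is the intended effect of role-dependent weighting.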

  17. Observations to information

    NASA Astrophysics Data System (ADS)

    Cox, S. J.

    2013-12-01

    Observations provide the fundamental constraint on natural science interpretations. Earth science observations originate in many contexts, including in-situ field observations and monitoring, various modes of remote sensing and geophysics, sampling for ex-situ (laboratory) analysis, as well as numerical modelling and simulation which also provide estimates of parameter values. Most investigations require a combination of these, often sourced from multiple initiatives and archives, so data discovery and re-organization can be a significant project burden. The Observations and Measurements (O&M) information model was developed to provide a common vocabulary that can be applied to all these cases, and thus provide a basis for cross-initiative and cross-domain interoperability. O&M was designed in the context of the standards for geographic information from OGC and ISO. It provides a complementary viewpoint to the well-known feature (object oriented) and coverage (property field) views, but prioritizes the property determination process. Nevertheless, use of O&M implies the existence of well defined feature types. In disciplines such as geology and ecosystem sciences the primary complexity is in their model of the world, for which the description of each item requires access to diverse observation sets. On the other hand, geophysics and earth observations work with simpler underlying information items, but in larger quantities over multiple spatio-temporal dimensions, acquired using complex sensor systems. Multiple transformations between the three viewpoints are involved in the data flows in most investigations, from collection through analysis to information and story. The O&M model classifies observations: - from a provider viewpoint: in terms of the sensor or procedure involved; - from a consumer viewpoint: in terms of the property being reported, and the feature with which it is associated. These concerns carry different weights in different applications. 
Communities generating data using ships, satellites and aircraft habitually classify observations by the source platform and mission, as this implies a rich set of metadata to the cognoscenti. However, integrators are more likely to focus on the phenomenon being observed, together with the location of the features carrying it. In this context sensor information informs quality evaluation, as a secondary consideration following after data discovery. The observation model is specialized by constraining facets, such as observed property, sensor or procedure, to be taken from a specific set or vocabulary. Such vocabularies are typically developed on a project or community basis, but data fusion depends on them being widely accessible, and comparable with related vocabularies. Better still if they are transparently governed, trusted and stable enough to encourage re-use. Semantic web technologies support distribution of rigorously constructed vocabularies through standard interfaces, with standard mechanisms for asserting or inferring of proximity and other relationships. Interoperability of observation data in future is likely to depend on the development of a viable ecosystem of these secondary resources.

  18. Modeling Virus Coinfection to Inform Management of Maize Lethal Necrosis in Kenya.

    PubMed

    Hilker, Frank M; Allen, Linda J S; Bokil, Vrushali A; Briggs, Cheryl J; Feng, Zhilan; Garrett, Karen A; Gross, Louis J; Hamelin, Frédéric M; Jeger, Michael J; Manore, Carrie A; Power, Alison G; Redinbaugh, Margaret G; Rúa, Megan A; Cunniffe, Nik J

    2017-10-01

Maize lethal necrosis (MLN) has emerged as a serious threat to food security in sub-Saharan Africa. MLN is caused by coinfection with two viruses, Maize chlorotic mottle virus and a potyvirus, often Sugarcane mosaic virus. To better understand the dynamics of MLN and to provide insight into disease management, we modeled the spread of the viruses causing MLN within and between growing seasons. The model allows for transmission via vectors, soil, and seed, as well as exogenous sources of infection. Following model parameterization, we predict how management affects disease prevalence and crop performance over multiple seasons. Resource-rich farmers with large holdings can achieve good control by combining clean seed and insect control. However, crop rotation is often required to effect full control. Resource-poor farmers with smaller holdings must rely on rotation and roguing, and achieve more limited control. For both types of farmer, unless management is synchronized over large areas, exogenous sources of infection can thwart control. As well as providing practical guidance, our modeling framework is potentially informative for other cropping systems in which coinfection has devastating effects. Our work also emphasizes how mathematical modeling can inform management of an emerging disease even when epidemiological information remains scanty. Copyright © 2017 The Author(s). This is an open access article distributed under the CC BY-NC-ND 4.0 International license.

  19. Regional input to joint European space weather service

    NASA Astrophysics Data System (ADS)

    Stanislawska, I.; Belehaki, A.; Jansen, F.; Heynderickx, D.; Lilensten, J.; Candidi, M.

The basis for elaborating a joint European space weather service within the COST 724 Action ('Developing the scientific basis for monitoring, modeling and predicting Space Weather') is enriched by many national and international activities which provide instruments and tools for global as well as regional monitoring and modeling. COST 724 stimulates, coordinates and supports Europe's goals of development and global cooperation by providing standards for timely and high-quality information and knowledge in space weather. Existing local capabilities are taken into account to develop synergies and avoid duplication. The enhancement of environment monitoring networks and associated instrument technology yields mutual advantages for the European service and for regional services specialized for local users' needs. It structurally increases the integration of limited-area services and generates a platform employing the same approach to each task, differing mostly in input and output data. In doing so, it also provides a complementary description of the environmental state within issued information. A general scheme of the regional services concept within the COST 724 activity can be the processing chain from measurements through algorithms to operational knowledge. It provides the platform for interaction among the local end users, who define what kind of information they need; the system providers, who elaborate the tools necessary to obtain the required information; and the local service providers, who do the actual processing of data and tailor it to specific users' needs. Such an initiative creates a unique possibility for small

  20. Continuous information flow fluctuations

    NASA Astrophysics Data System (ADS)

    Rosinberg, Martin Luc; Horowitz, Jordan M.

    2016-10-01

    Information plays a pivotal role in the thermodynamics of nonequilibrium processes with feedback. However, much remains to be learned about the nature of information fluctuations in small-scale devices and their relation with fluctuations in other thermodynamic quantities, like heat and work. Here we derive a series of fluctuation theorems for information flow and partial entropy production in a Brownian particle model of feedback cooling and extend them to arbitrary driven diffusion processes. We then analyze the long-time behavior of the feedback-cooling model in detail. Our results provide insights into the structure and origin of large deviations of information and thermodynamic quantities in autonomous Maxwell's demons.
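
    The flavor of feedback cooling can be illustrated with a toy underdamped Langevin simulation in which a demon applies a force proportional to the (here, noiselessly measured) velocity. All parameters are illustrative, and the setup is far simpler than the model analyzed in the paper.

```python
import math
import random

# Toy velocity-feedback cooling: an Euler-Maruyama integration of
# dv = -(gamma + gain) * v dt + sqrt(2 * gamma * T) dW   (unit mass).
# The stationary kinetic temperature is gamma*T/(gamma + gain), so any
# positive feedback gain cools the particle below the bath temperature.

def kinetic_temperature(gain, steps=200000, dt=1e-3, gamma=1.0, temp=1.0, seed=0):
    """Estimate <v^2> under a feedback force -gain * v."""
    rng = random.Random(seed)
    v, acc = 0.0, 0.0
    sigma = math.sqrt(2.0 * gamma * temp * dt)   # thermal noise amplitude
    for _ in range(steps):
        v += (-gamma * v - gain * v) * dt + sigma * rng.gauss(0.0, 1.0)
        acc += v * v
    return acc / steps

t_free = kinetic_temperature(gain=0.0)    # expected near the bath value, 1.0
t_cooled = kinetic_temperature(gain=4.0)  # expected near gamma*T/(gamma+gain) = 0.2
```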

  1. Modeling of information diffusion in Twitter-like social networks under information overload.

    PubMed

    Li, Pei; Li, Wei; Wang, Hui; Zhang, Xin

    2014-01-01

    Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. View scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type of user is adopted to characterize the information diffusion efficiency, which is calculated theoretically. To verify the accuracy of the theoretical analysis, we conduct simulations and report the results, which match the theoretical analysis closely. These results are important for understanding the diffusion dynamics in social networks, and the analysis framework can be extended to more realistic situations.
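
    A toy simulation of the "view scope" idea (each user sees only the most recent messages in a bounded feed) might look like the following. The follower graph, forwarding probability, background-traffic rate, and scope size are invented for illustration and are not the paper's parameterization.

```python
import random
from collections import deque

# A tagged message is delivered to followers' bounded feeds; competing "noise"
# traffic can push it out of view before it is seen, so only messages still in
# the view scope count as appearances (and can be forwarded onward).

def simulate_diffusion(n_users=200, followers=10, view_scope=5,
                       forward_prob=0.05, background_max=3, seed=42):
    """Count how many view scopes a tagged message appears in."""
    rng = random.Random(seed)
    graph = {u: rng.sample(range(n_users), followers) for u in range(n_users)}
    feeds = {u: deque(maxlen=view_scope) for u in range(n_users)}
    appearances = 0
    forwarded = {0}              # user 0 generates the tagged message
    frontier = [0]
    while frontier:
        nxt = []
        for u in frontier:
            for f in graph[u]:
                feeds[f].append("tagged")
                for _ in range(rng.randrange(background_max + 1)):
                    feeds[f].append("noise")   # competing messages arrive too
                if "tagged" in feeds[f]:       # still visible in the view scope
                    appearances += 1
                    if f not in forwarded and rng.random() < forward_prob:
                        forwarded.add(f)
                        nxt.append(f)
        frontier = nxt
    return appearances

total_appearances = simulate_diffusion()
```

    Shrinking `view_scope` or raising `background_max` models heavier overload: the tagged message is evicted from feeds sooner, so it appears in fewer view scopes.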

  2. Modeling of Information Diffusion in Twitter-Like Social Networks under Information Overload

    PubMed Central

    Li, Wei

    2014-01-01

    Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. View scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type of user is adopted to characterize the information diffusion efficiency, which is calculated theoretically. To verify the accuracy of the theoretical analysis, we conduct simulations and report the results, which match the theoretical analysis closely. These results are important for understanding the diffusion dynamics in social networks, and the analysis framework can be extended to more realistic situations. PMID:24795541

  3. Construction of a multimodal CT-video chest model

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2014-03-01

    Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.

  4. Exploration Medical System Trade Study Tools Overview

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Myers, J.; Latorella, K.; Cerro, J.; Hanson, A.; Hailey, M.; Middour, C.

    2018-01-01

    ExMC is creating an ecosystem of tools to enable well-informed medical system trade studies. The suite of tools addresses important system-implementation aspects of the space medical capabilities trade space and is being built using knowledge from the medical community regarding the unique aspects of space flight. Two integrating models, a systems engineering model and a medical risk analysis model, tie the tools together to produce an integrated assessment of the medical system and its ability to achieve medical system target requirements. This presentation will provide an overview of the various tools that are part of the tool ecosystem. Initially, the presentation's focus will address the tools that supply the foundational information to the ecosystem. Specifically, the talk will describe how information about how medicine will be practiced is captured and categorized for efficient use in the tool suite. For example, the talk will cover capturing which conditions will be planned for in-mission treatment, planned medical activities (e.g., a periodic physical exam), required medical capabilities (e.g., provide imaging), and options to implement the capabilities (e.g., an ultrasound device). Database storage and configuration management will also be discussed. The presentation will include an overview of how these information tools will be tied to parameters in a Systems Modeling Language (SysML) model, allowing traceability to system behavioral, structural, and requirements content. The discussion will also describe an HRP-led enhanced risk assessment model developed to provide quantitative insight into each capability's contribution to mission success. Key outputs from these various tools, to be shared with the space medical and exploration mission development communities, will be assessments of how well medical system implementation options satisfy requirements and of per-capability contributions toward achieving requirements.
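
    The condition-to-capability-to-option traceability described above can be pictured as a simple mapping. Every entry below is hypothetical; none of it comes from ExMC data.

```python
# Purely illustrative sketch of the kind of traceable mapping such a tool suite
# might store: planned conditions -> required capabilities -> candidate
# implementation options.

conditions = {
    "kidney stone": ["imaging", "pain management"],
    "bone fracture": ["imaging", "splinting"],
}
capability_options = {
    "imaging": ["ultrasound device"],
    "pain management": ["oral analgesics"],
    "splinting": ["splint kit"],
}

def required_items(planned_conditions):
    """Flatten planned conditions into the set of implementation options."""
    items = set()
    for cond in planned_conditions:
        for capability in conditions[cond]:
            items.update(capability_options[capability])
    return items

manifest = required_items(["kidney stone", "bone fracture"])
```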

  5. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) metrics; 2) models; and 3) monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding a program's dynamic behavior from different perspectives through analysis of the input/output data.
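
    A minimal example of a dynamic analyzer in the metrics category: a tracer that counts, at run time, how often each line of a target function executes, which is information static analysis alone cannot provide. This is an illustrative sketch, not one of the tools surveyed in the paper.

```python
import sys

# Count executed lines of a target function using Python's tracing hook.
# The hook receives a "call" event per function call; returning the tracer
# enables per-line events for that frame.

def count_lines(func, *args):
    counts = {}
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            counts[frame.f_lineno] = counts.get(frame.f_lineno, 0) + 1
        return tracer
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)    # always remove the hook
    return result, counts

def target(n):
    total = 0
    for i in range(n):
        total += i
    return total

result, counts = count_lines(target, 5)
```

    Running it shows the loop body executing once per iteration, exactly the run-time behavior a static analyzer could only estimate.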

  6. A three-talk model for shared decision making: multistage consultation process

    PubMed Central

    Durand, Marie Anne; Song, Julia; Aarts, Johanna; Barr, Paul J; Berger, Zackary; Cochran, Nan; Frosch, Dominick; Galasiński, Dariusz; Gulbrandsen, Pål; Han, Paul K J; Härter, Martin; Kinnersley, Paul; Lloyd, Amy; Mishra, Manish; Perestelo-Perez, Lilisbeth; Scholl, Isabelle; Tomori, Kounosuke; Trevena, Lyndal; Witteman, Holly O; Van der Weijden, Trudy

    2017-01-01

    Objectives To revise an existing three-talk model for learning how to achieve shared decision making, and to consult with relevant stakeholders to update and obtain wider engagement. Design Multistage consultation process. Setting Key informant group, communities of interest, and survey of clinical specialties. Participants 19 key informants, 153 member responses from multiple communities of interest, and 316 responses to an online survey from medically qualified clinicians from six specialties. Results After extended consultation over three iterations, we revised the three-talk model by making changes to one talk category, adding the need to elicit patient goals, providing a clear set of tasks for each talk category, and adding suggested scripts to illustrate each step. A new three-talk model of shared decision making is proposed, based on “team talk,” “option talk,” and “decision talk,” to depict a process of collaboration and deliberation. Team talk places emphasis on the need to provide support to patients when they are made aware of choices, and to elicit their goals as a means of guiding decision making processes. Option talk refers to the task of comparing alternatives, using risk communication principles. Decision talk refers to the task of arriving at decisions that reflect the informed preferences of patients, guided by the experience and expertise of health professionals. Conclusions The revised three-talk model of shared decision making depicts conversational steps, initiated by providing support when introducing options, followed by strategies to compare and discuss trade-offs, before deliberation based on informed preferences. PMID:29109079

  7. Exploring nursing e-learning systems success based on information system success model.

    PubMed

    Chang, Hui-Chuan; Liu, Chung-Feng; Hwang, Hsin-Ginn

    2011-12-01

    E-learning is thought of as an innovative approach to enhance nurses' care service knowledge. Extensive research has provided rich information toward system development, course design, and nurses' satisfaction with an e-learning system. However, a comprehensive view of nursing e-learning system success is an important but less studied topic. The purpose of this research was to explore net benefits of nursing e-learning systems based on the updated DeLone and McLean Information System Success Model. The study used a self-administered questionnaire to collect 208 valid responses from nurses at 21 of Taiwan's medium- and large-scale hospitals that have implemented nursing e-learning systems. The result confirms that the model is sufficient to explore the nurses' use of e-learning systems in terms of intention to use, user satisfaction, and net benefits. However, while the three exogenous quality factors (system quality, information quality, and service quality) were all found to be critical factors affecting user satisfaction, only information quality showed a direct effect on the intention to use. This study provides useful insights for evaluating nursing e-learning system qualities as well as an understanding of nurses' intentions and satisfaction related to performance benefits.

  8. Virtual working systems to support R&D groups

    NASA Astrophysics Data System (ADS)

    Dew, Peter M.; Leigh, Christine; Drew, Richard S.; Morris, David; Curson, Jayne

    1995-03-01

    The paper reports on progress at Leeds University in building a Virtual Science Park (VSP) to enhance the University's ability to interact with industry and to grow its applied research and workplace learning activities. The VSP exploits advances in real-time collaborative computing and networking to provide an environment that meets the objectives of physically based science parks without the need for the organizations to relocate. It provides an integrated set of services (e.g. virtual consultancy, work-based learning) built around a structured person-centered information model. This model supports the integration of tools for: (a) navigating around the information space; (b) browsing information stored within the VSP database; (c) communicating through a variety of person-to-person collaborative tools; and (d) managing the information stored in the VSP, including the relationships to other information that support the underlying model. The paper gives an overview of a generic virtual working system based on X.500 directory services and the World-Wide Web that can be used to support the Virtual Science Park. Finally, the paper discusses some of the research issues that need to be addressed to fully realize a Virtual Science Park.

  9. EnviroNET: An on-line environment data base for LDEF data

    NASA Technical Reports Server (NTRS)

    Lauriente, Michael

    1992-01-01

    EnviroNET is an on-line, free-form data base intended to provide a centralized depository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day, every day. The information, which is updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government facilities, industry, universities, and ESA. The models accept parameter input from the user and calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, the magnetic field, and the ionosphere. A user-friendly interface is standard for all the models, with a pop-up help window giving information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intensive graphical applications to run "what if" scenarios. A proposed plan for developing a repository of LDEF information for a user group concludes the presentation.

  10. Extracting duration information in a picture category decoding task using hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without additional training. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications.
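
    The duration readout rests on standard Viterbi decoding: once the most likely hidden-state path is computed, the run length of the "stimulus" state gives a duration estimate. The toy two-state HMM below is illustrative and unrelated to the study's brain-signal data.

```python
import math

# Standard Viterbi decoder for a discrete-output HMM, in log space.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for the observation sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prob, prev = max(
                (V[-1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states)
            col[s], ptr[s] = prob, prev
        V.append(col)
        back.append(ptr)
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for ptr in reversed(back):          # backtrack through the pointers
        path.append(ptr[path[-1]])
    return path[::-1]

states = ("baseline", "stimulus")
start_p = {"baseline": 0.9, "stimulus": 0.1}
trans_p = {"baseline": {"baseline": 0.8, "stimulus": 0.2},
           "stimulus": {"baseline": 0.2, "stimulus": 0.8}}
emit_p = {"baseline": {"low": 0.9, "high": 0.1},
          "stimulus": {"low": 0.2, "high": 0.8}}

obs = ["low", "low", "high", "high", "high", "low"]
path = viterbi(obs, states, start_p, trans_p, emit_p)
# The run length of "stimulus" in the path is the decoded duration.
duration = sum(1 for s in path if s == "stimulus")
```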

  11. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
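
    The two-component idea can be sketched with NumPy: a parametric linear mean fitted by least squares, plus a Gaussian-process correction fitted to the residuals. The kernel choice, hyperparameters, and synthetic dose-response curve here are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Squared-exponential covariance between two sets of doses.
def sq_exp_kernel(x1, x2, length=0.5, var=1.0):
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def fit_predict(x, y, x_new, noise=1e-2):
    X = np.vstack([np.ones_like(x), x]).T          # parametric design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear mean component
    resid = y - X @ beta
    K = sq_exp_kernel(x, x) + noise * np.eye(len(x))
    Ks = sq_exp_kernel(x_new, x)
    gp_part = Ks @ np.linalg.solve(K, resid)       # GP posterior mean on residuals
    mean_new = np.vstack([np.ones_like(x_new), x_new]).T @ beta
    return mean_new + gp_part

dose = np.linspace(0.0, 5.0, 20)
response = 1.0 + 0.5 * dose + 0.3 * np.sin(2.0 * dose)  # nonlinear dose-response
pred = fit_predict(dose, response, dose)
```

    The linear part carries the explanatory trend; the GP part absorbs the nonlinearity the line misses, mirroring the division of labor described in the abstract.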

  12. A geographic data model for representing ground water systems.

    PubMed

    Strassberg, Gil; Maidment, David R; Jones, Norm L

    2007-01-01

    The Arc Hydro ground water data model is a geographic data model for representing spatial and temporal ground water information within a geographic information system (GIS). The data model is a standardized representation of ground water systems within a spatial database that provides a public domain template for GIS users to store, document, and analyze commonly used spatial and temporal ground water data sets. This paper describes the data model framework, a simplified version of the complete ground water data model that includes two-dimensional and three-dimensional (3D) object classes for representing aquifers, wells, and borehole data, and the 3D geospatial context in which these data exist. The framework data model also includes tabular objects for representing temporal information such as water levels and water quality samples that are related with spatial features.
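
    A heavily simplified, hypothetical mirror of such a framework (spatial feature classes plus related tabular time series) could be sketched as follows. The field names are invented for illustration and are not the actual Arc Hydro schema.

```python
from dataclasses import dataclass, field

# Spatial feature classes (aquifer, well) related to tabular temporal data
# (water levels), echoing the framework's separation of spatial and temporal
# objects. Requires Python 3.9+ for the builtin generic annotation.

@dataclass
class Aquifer:
    name: str
    top_elevation_m: float
    bottom_elevation_m: float

@dataclass
class WaterLevel:
    timestamp: str            # ISO 8601 date, so strings sort chronologically
    level_m: float            # depth to water, metres below datum

@dataclass
class Well:
    well_id: str
    x: float
    y: float
    aquifer: Aquifer
    levels: list[WaterLevel] = field(default_factory=list)

    def latest_level(self):
        return max(self.levels, key=lambda w: w.timestamp) if self.levels else None

ogallala = Aquifer("Ogallala", 950.0, 820.0)
w1 = Well("W-001", 101325.0, 223400.0, ogallala)
w1.levels.append(WaterLevel("2006-05-01", 41.2))
w1.levels.append(WaterLevel("2006-11-01", 42.7))
```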

  13. Validating archetypes for the Multiple Sclerosis Functional Composite.

    PubMed

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet regarded sufficiently. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.

  14. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet regarded sufficiently. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model. PMID:25087081

  15. Toward a Concept of Operations for Aviation Weather Information Implementation in the Evolving National Airspace System

    NASA Technical Reports Server (NTRS)

    McAdaragh, Raymon M.

    2002-01-01

    The capacity of the National Airspace System is being stressed due to the limits of current technologies. Because of this, the FAA and NASA are working to develop new technologies to increase the system's capacity while enhancing safety. Adverse weather has been determined to be a major factor in aircraft accidents and fatalities, and the FAA and NASA have developed programs to improve aviation weather information technologies and communications for system users. The Aviation Weather Information Element of the Weather Accident Prevention Project of NASA's Aviation Safety Program is currently working to develop these technologies in coordination with the FAA and industry. This paper sets forth a theoretical approach to implement these new technologies while addressing the National Airspace System (NAS) as an evolving system with weather information as one of its subsystems. With this approach in place, system users will be able to acquire the type of weather information that is needed based upon the type of decision-making situation and condition that is encountered. The theoretical approach addressed in this paper takes the form of a model for weather information implementation. This model addresses the use of weather information in three decision-making situations, based upon the system user's operational perspective. The model also addresses two decision-making conditions, which are based upon the need for collaboration due to the level of support offered by the weather information provided by each new product or technology. The model is proposed for use in weather information implementation in order to provide a systems approach to the NAS. Enhancements to the NAS collaborative decision-making capabilities are also suggested.

  16. An Integrative Behavioral Model of Information Security Policy Compliance

    PubMed Central

    Kim, Sang Hoon; Yang, Kyung Hoon; Park, Sunyoung

    2014-01-01

    The authors identified the behavioral factors that influence organization members' compliance with the information security policy, on the basis of neutralization theory, the theory of planned behavior, and protection motivation theory. According to the theory of planned behavior, members' attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to explain information system security policy violations. Based on protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 of 207 questionnaires were usable. The causal model was tested with PLS. The reliability, validity, and model fit were found to be statistically significant. The hypothesis tests showed that seven of the eight hypotheses were supported. The theoretical implications of this study are as follows: (1) the study is expected to serve as a baseline for future research on organization members' compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. The study also has practical implications. First, it can provide a guideline to support the successful strategic establishment and implementation of information system security policies in organizations. Second, it shows the need to emphasize education and training programs that suppress members' neutralization of intentions to violate the information security policy. PMID:24971373

  17. An integrative behavioral model of information security policy compliance.

    PubMed

    Kim, Sang Hoon; Yang, Kyung Hoon; Park, Sunyoung

    2014-01-01

    The authors identified the behavioral factors that influence organization members' compliance with the information security policy, on the basis of neutralization theory, the theory of planned behavior, and protection motivation theory. According to the theory of planned behavior, members' attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to explain information system security policy violations. Based on protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 of 207 questionnaires were usable. The causal model was tested with PLS. The reliability, validity, and model fit were found to be statistically significant. The hypothesis tests showed that seven of the eight hypotheses were supported. The theoretical implications of this study are as follows: (1) the study is expected to serve as a baseline for future research on organization members' compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. The study also has practical implications. First, it can provide a guideline to support the successful strategic establishment and implementation of information system security policies in organizations. Second, it shows the need to emphasize education and training programs that suppress members' neutralization of intentions to violate the information security policy.

  18. Professional Journals as a Source of PCK for Teaching Nature of Science: An Examination of Articles Published in The Science Teacher (TST) (an NSTA Journal), 1995-2010

    NASA Astrophysics Data System (ADS)

    Aydın, Sevgi; Demirdöğen, Betül; Muslu, Nilay; Hanuscin, Deborah L.

    2013-10-01

    A number of science education policy documents recommend that students develop an understanding of the enterprise of science and the nature of science (NOS). Despite this emphasis, there is still a gap between policy and practice. The teacher professional literature provides one potential venue for bridging this gap by offering “activities that work” (Appleton, in Elementary Science Teacher Education: International Perspectives on Contemporary Issues and Practice, Lawrence Erlbaum Associates, Mahwah, NJ, 2006) that can scaffold teachers’ developing pedagogical content knowledge (PCK) for teaching NOS. We analyzed articles published in the NSTA journal The Science Teacher (1995-2010) in terms of the degree to which they provide appropriate model activities and specific information that can support the development of teachers’ PCK for teaching NOS. Our analysis revealed a diversity of NOS aspects addressed by the authors and wide variation in the percentage of articles focused on each aspect. Additionally, we found that few articles provided robust information related to all the component knowledge bases of PCK for NOS. In particular, the extant practitioner literature offers few models for teaching aspects of NOS such as the function and nature of scientific theory. Furthermore, though articles provided information relevant to teachers’ knowledge of instructional strategies for NOS, information to inform teachers’ knowledge of assessment in this regard was lacking. We provide recommendations for ways in which the practitioner literature may support teachers’ teaching of NOS through more robust attention to the types of knowledge that research indicates are needed to teach NOS effectively.

  19. Evaluating and Mitigating the Impact of Complexity in Software Models

    DTIC Science & Technology

    2015-12-01

    Section 1 (Introduction) provides our motivation to study complexity and the essential research questions that we address in this effort. Some background information provides the reader with a basis for the work and related areas explored. Section 2 (The Impact of Complexity) discusses the impact of model-based...

  20. Developmental and Life-Stage Physiologically-Based Pharmacokinetic (PBPK) Models in Humans and Animal Models.

    EPA Science Inventory

    PBPK models provide a computational framework for incorporating pertinent physiological and biochemical information to estimate in vivo levels of xenobiotics in biological tissues. In general, PBPK models are used to correlate exposures to target tissue levels of chemicals and th...

  1. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  2. Groundwater Data Package for the 2004 Composite Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorne, Paul D.

    2004-08-11

    This report presents data and information supporting the groundwater module of the 2004 Composite Analysis. The conceptual model of groundwater flow and transport at the Hanford Site is described, and the specific information applied in the numerical implementation of the module is provided.

  3. An analytical approach to customer requirement information processing

    NASA Astrophysics Data System (ADS)

    Zhou, Zude; Xiao, Zheng; Liu, Quan; Ai, Qingsong

    2013-11-01

    'Customer requirements' (CRs) management is a key component of customer relationship management (CRM). By processing customer-focused information, CRs management plays an important role in enterprise systems (ESs). Although the two main CR analysis methods, quality function deployment (QFD) and the Kano model, have been applied in many fields by many enterprises over the past several decades, limitations such as complex processes and operations make them unsuitable for online businesses among small- and medium-sized enterprises (SMEs). Currently, most SMEs lack the resources to implement QFD or the Kano model. In this article, we propose a method named customer requirement information (CRI), which provides a simpler and easier way for SMEs to conduct CR analysis. The proposed method analyses CRs from the perspective of information and applies mathematical methods to the analysis process. A detailed description of CRI acquisition, classification and processing is provided.

  4. Construction and Resource Utilization Explorer (CRUX): Implementing Instrument Suite Data Fusion to Characterize Regolith Hydrogen Resources

    NASA Technical Reports Server (NTRS)

    Haldemann, Albert F. C.; Johnson, Jerome B.; Elphic, Richard C.; Boynton, William V.; Wetzel, John

    2006-01-01

    CRUX is a modular suite of geophysical and borehole instruments combined with display and decision support system (MapperDSS) tools to characterize regolith resources, surface conditions, and geotechnical properties. CRUX is a NASA-funded Technology Maturation Program effort to provide enabling technology for Lunar and Planetary Surface Operations (LPSO). The MapperDSS uses data fusion methods with CRUX instruments, and other available data and models, to provide regolith properties information needed for LPSO that cannot be determined otherwise. We demonstrate the data fusion method by showing how it might be applied to characterize the distribution and form of hydrogen using a selection of CRUX instruments: Borehole Neutron Probe and Thermal Evolved Gas Analyzer data as a function of depth help interpret Surface Neutron Probe data to generate 3D information. Secondary information from other instruments along with physical models improves the hydrogen distribution characterization, enabling information products for operational decision-making.

  5. EnviroNET: On-line information for LDEF

    NASA Technical Reports Server (NTRS)

    Lauriente, Michael

    1993-01-01

    EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.

  6. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    NASA Technical Reports Server (NTRS)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  7. Customer-Provider Strategic Alignment: A Maturity Model

    NASA Astrophysics Data System (ADS)

    Luftman, Jerry; Brown, Carol V.; Balaji, S.

    This chapter presents a new model for assessing the maturity of a customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need to be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.

  8. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler, more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). 
Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
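The core computation GMIS targets can be illustrated on a toy problem. The sketch below is a hypothetical, minimal stand-in, not the published method: a single Gaussian proposal instead of a fitted mixture, plain importance sampling instead of bridge sampling, and a one-parameter conjugate model whose evidence is known in closed form, so the estimate can be checked.

```python
import math
import random

random.seed(0)

def norm_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

# Toy conjugate model: one datum y ~ N(theta, s2), prior theta ~ N(0, t2).
# The evidence integral has a closed form: N(y | 0, s2 + t2).
y, s2, t2 = 1.3, 0.25, 1.0
z_true = norm_pdf(y, 0.0, s2 + t2)

# The posterior is Gaussian here; its samples stand in for DREAM output.
post_var = 1.0 / (1.0 / s2 + 1.0 / t2)
post_mean = post_var * y / s2
samples = [random.gauss(post_mean, math.sqrt(post_var)) for _ in range(5000)]

# Fit a proposal q to the posterior samples (one Gaussian; GMIS would fit
# a multi-component mixture in higher dimensions).
q_mu = sum(samples) / len(samples)
q_var = sum((s - q_mu) ** 2 for s in samples) / len(samples)

# Importance sampling: Z ~= E_q[ p(y|theta) p(theta) / q(theta) ]
draws = [random.gauss(q_mu, math.sqrt(q_var)) for _ in range(20000)]
weights = [norm_pdf(y, th, s2) * norm_pdf(th, 0.0, t2) / norm_pdf(th, q_mu, q_var)
           for th in draws]
z_hat = sum(weights) / len(weights)
print(z_true, z_hat)
```

Because the proposal is fitted to the posterior, the importance weights are nearly constant and the estimator is stable; bridge sampling generalizes this step for harder, multimodal posteriors.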

  9. Understanding leachate flow in municipal solid waste landfills by combining time-lapse ERT and subsurface flow modelling - Part II: Constraint methodology of hydrodynamic models.

    PubMed

    Audebert, M; Oxarango, L; Duquennoi, C; Touze-Foltz, N; Forquet, N; Clément, R

    2016-09-01

    Leachate recirculation is a key process in the operation of municipal solid waste landfills as bioreactors. To ensure optimal water content distribution, bioreactor operators need tools to design leachate injection systems. Prediction of leachate flow by subsurface flow modelling could provide useful information for the design of such systems. However, hydrodynamic models require additional data to constrain them and to assess hydrodynamic parameters. Electrical resistivity tomography (ERT) is a suitable method for studying leachate infiltration at the landfill scale. It can provide spatially distributed information that is useful for constraining hydrodynamic models. However, this geophysical method does not allow ERT users to measure water content in waste directly. The MICS (multiple inversions and clustering strategy) methodology was proposed to delineate the infiltration area precisely during time-lapse ERT surveys, in order to avoid empirical petrophysical relationships, which are not adapted to a medium as heterogeneous as waste. The infiltration shapes and hydrodynamic information extracted with MICS were used to constrain hydrodynamic models in assessing parameters. The constraint methodology developed in this paper was tested on two hydrodynamic models: an equilibrium model, in which flow within the waste medium is estimated using a single-continuum approach, and a non-equilibrium model, in which flow is estimated using a dual-continuum approach; the latter represents leachate flow into fractures. Finally, this methodology provides insight into the advantages and limitations of hydrodynamic models. Furthermore, we suggest an explanation for the large volume detected by MICS when a small volume of leachate is injected.

  10. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.

  11. Information-Theoretic Perspectives on Geophysical Models

    NASA Astrophysics Data System (ADS)

    Nearing, Grey

    2016-04-01

    To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. 
Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar metrics) (Csiszár, 1972). Fundamentally, models can only translate existing information - they cannot create information. That is, all of the information about any future (or otherwise unobserved) event is contained in the initial and boundary conditions of whatever model we will use to predict that phenomenon (Gong et al., 2013). A model simply tells us how to process the available information in a way that is as close as possible to isomorphic with how the system itself processes information. As such, models can only lose or corrupt information, because at best a model can only perfectly extract all information contained in its input data; this is a theorem called the Data Processing Inequality (Cover and Thomas, 1991), and this perspective represents a purely ontological treatment of information in models. In practice, however, models provide information to scientists about how to translate information, and in this epistemic sense, models can provide positive quantities of information. During engineering-type efforts, where our goal is fundamentally to make predictions, we would measure the (possibly positive) net epistemic information from some hypothesized model relative to some uninformative prior, or relative to some competing model(s), to measure how much information we gain by running the model (Nearing and Gupta, 2015). True science-focused efforts, however, where the intent is learning rather than prediction, cannot rely on this type of comparative hypothesis testing. 
We therefore encourage scientists to take the first perspective outlined above and to attempt to measure the ontological information that is lost by their models, rather than the epistemological information that is gained from their models. This represents a radical departure from how scientists usually approach the problem of model evaluation. It turns out that it is possible to approximate this objective in practice. We are aware of no existing efforts to this effect in either the philosophy or practice of science (except by Gong et al., 2013, whose fundamental insight is the basis for this talk), and here I offer two examples of practical methods that scientists might use to approximately measure ontological information. I place this practical discussion in the context of several recent and high-profile experiments that have found that simple out-of-sample statistical models typically (vastly) outperform our most sophisticated terrestrial hydrology models. I offer some perspective on several open questions about how to use these findings to improve our models and understanding of these systems. Cartwright, N. (1983) How the Laws of Physics Lie. New York, NY: Cambridge Univ Press. Clark, M. P., Kavetski, D. and Fenicia, F. (2011) 'Pursuing the method of multiple working hypotheses for hydrological modeling', Water Resources Research, 47(9). Cover, T. M. and Thomas, J. A. (1991) Elements of Information Theory. New York, NY: Wiley-Interscience. Cox, R. T. (1946) 'Probability, frequency and reasonable expectation', American Journal of Physics, 14, pp. 1-13. Csiszár, I. (1972) 'A Class of Measures of Informativity of Observation Channels', Periodica Mathematica Hungarica, 2(1), pp. 191-213. Davies, P. C. W. (1990) 'Why is the physical world so comprehensible', Complexity, entropy and the physics of information, pp. 61-70. Gong, W., Gupta, H. V., Yang, D., Sricharan, K. and Hero, A. O. 
(2013) 'Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach', Water Resources Research, 49(4), pp. 2253-2273. Jaynes, E. T. (2003) Probability Theory: The Logic of Science. New York, NY: Cambridge University Press. Nearing, G. S. and Gupta, H. V. (2015) 'The quantity and quality of information in hydrologic models', Water Resources Research, 51(1), pp. 524-538. Popper, K. R. (2002) The Logic of Scientific Discovery. New York: Routledge. Van Horn, K. S. (2003) 'Constructing a logic of plausible inference: a guide to cox's theorem', International Journal of Approximate Reasoning, 34(1), pp. 3-24.
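The Data Processing Inequality invoked above can be checked numerically on a small discrete Markov chain X → Y → Z. The sketch below is an arbitrary illustration (the distributions are invented, not from the talk): a noisy channel produces Y from X, then a deterministic lossy map, standing in for a "model", produces Z from Y, and the mutual information can only decrease.

```python
import math

def mutual_info(pxy):
    # I(X;Y) in bits from a joint distribution given as nested lists.
    px = [sum(row) for row in pxy]
    py = [sum(col) for col in zip(*pxy)]
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(pxy)
               for j, p in enumerate(row) if p > 0)

# Markov chain X -> Y -> Z.
px = [0.5, 0.5]
p_y_given_x = [[0.8, 0.1, 0.1],
               [0.1, 0.1, 0.8]]
pxy = [[px[i] * p for p in row] for i, row in enumerate(p_y_given_x)]

# Z deterministically merges Y's first two symbols: information is lost.
pxz = [[row[0] + row[1], row[2]] for row in pxy]

i_xy = mutual_info(pxy)
i_xz = mutual_info(pxz)
print(round(i_xy, 3), round(i_xz, 3))  # i_xz <= i_xy always holds
```

No choice of deterministic post-processing can make `i_xz` exceed `i_xy`; at best a lossless relabeling preserves it, which is the sense in which a model can only lose or corrupt the information in its inputs.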

  12. Joint System of the National Hydrometeorology for disaster prevention

    NASA Astrophysics Data System (ADS)

    Lim, J.; Cho, K.; Lee, Y. S.; Jung, H. S.; Yoo, H. D.; Ryu, D.; Kwon, J.

    2014-12-01

    Hydrological disaster relief expenditure accounts for as much as 70 percent of total disaster expenditure in Korea. Since disaster response and recovery are normally based on previous experience, there have been limitations in dealing with the ever-increasing short-duration localized heavy rainfall of the climate change era. It therefore became necessary to establish a system that can respond to a disaster in advance through the analysis and prediction of hydrometeorological information. Because a wide range of big data is essential, this cannot be done by a single agency alone, which is why three hydrometeorology-related agencies cooperated to establish a pilot (trial) system in the Soemjingang basin in 2013. The three governmental agencies are the National Emergency Management Agency (NEMA), in charge of disaster prevention and public safety; the National Geographic Information Institute (NGII, under the Ministry of Land, Infrastructure and Transport), in charge of geographical data; and the Korea Meteorological Administration (KMA), in charge of weather information. The pilot system was designed to respond to disasters in advance by providing flash-flood damage prediction information to public safety officers, using high-resolution precipitation prediction data from the KMA and high-precision geographic data from NGII. To produce high-resolution precipitation prediction data, the KMA downscales a 25 km × 25 km global model to a 3 km × 3 km local model and runs the local model twice a day. To maximize the utility of the weather prediction information, the KMA provides predictions for 7 days at 1-hour intervals for the Soemjingang basin, in order to monitor and predict not only flood but also drought.
As no prediction is complete without a description of its uncertainty, we plan to continuously improve the quantification of uncertainty in the weather predictions and their impacts. The workflow for producing and providing the weather prediction information will be presented at the AGU Fall Meeting.

  13. The Effects of Climate Model Similarity on Local, Risk-Based Adaptation Planning

    NASA Astrophysics Data System (ADS)

    Steinschneider, S.; Brown, C. M.

    2014-12-01

    The climate science community has recently proposed techniques to develop probabilistic projections of climate change from ensemble climate model output. These methods provide a means to incorporate the formal concept of risk, i.e., the product of impact and probability, into long-term planning assessments for local systems under climate change. However, approaches for pdf development often assume that different climate models provide independent information for the estimation of probabilities, despite model similarities that stem from a common genealogy. Here we utilize an ensemble of projections from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to develop probabilistic climate information, with and without an accounting of inter-model correlations, and use it to estimate climate-related risks to a local water utility in Colorado, U.S. We show that the tail risk of extreme climate changes in both mean precipitation and temperature is underestimated if model correlations are ignored. When coupled with impact models of the hydrology and infrastructure of the water utility, the underestimation of extreme climate changes substantially alters the quantification of risk for water supply shortages by mid-century. We argue that progress in climate change adaptation for local systems requires the recognition that there is less information in multi-model climate ensembles than previously thought. Importantly, adaptation decisions cannot be limited to the spread in one generation of climate models.
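The underestimation described here follows from the textbook variance formula for the mean of m equally correlated estimates. The sketch below uses entirely hypothetical numbers (they are not from the study) to show how assumed independence shrinks both the spread and the tail probability of the ensemble mean.

```python
import math

# Hypothetical ensemble: m model projections of a climate change signal,
# each with standard deviation sigma and pairwise correlation rho arising
# from shared model genealogy. All values are illustrative.
m, sigma, rho = 20, 1.0, 0.5

var_indep = sigma**2 / m                        # models treated as independent
var_corr = sigma**2 / m * (1 + (m - 1) * rho)  # correlation accounted for

# Effective number of independent models is far below m:
m_eff = m / (1 + (m - 1) * rho)

def tail_prob(threshold, var):
    # P(ensemble-mean error > threshold) for a zero-mean Gaussian
    return 0.5 * math.erfc(threshold / math.sqrt(2 * var))

p_indep = tail_prob(1.0, var_indep)
p_corr = tail_prob(1.0, var_corr)
print(var_indep, var_corr, round(m_eff, 2))
print(p_indep, p_corr)  # tail risk is far larger once correlation is included
```

With these numbers the 20 correlated models carry the information of fewer than 2 independent ones, so the probability of an extreme climate change is orders of magnitude larger than the independence assumption suggests, which is the mechanism behind the underestimated tail risk reported here.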

  14. Knowledge sifters in MDA technologies

    NASA Astrophysics Data System (ADS)

    Kravchenko, Yuri; Kursitys, Ilona; Bova, Victoria

    2018-05-01

    The article considers a new approach to the efficient management of information processes based on object models. With the help of special design tools, a generic, application-independent model is created first, and the program is then implemented in a specific development environment. The development process is based entirely on a model that must contain all the information necessary for programming. A sufficiently detailed model enables the automatic creation of the typical parts of an application whose development is amenable to automation.

  15. How to measure technology assessment: an introduction.

    PubMed

    Hasman, Arie

    2014-01-01

    This contribution introduces the Technology Acceptance Model. Since information systems are still underutilized, applying models of user acceptance can provide important clues about what can be done to increase system usage.

  16. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...

  17. Markup of temporal information in electronic health records.

    PubMed

    Hyun, Sookyung; Bakken, Suzanne; Johnson, Stephen B

    2006-01-01

    Temporal information plays a critical role in the understanding of clinical narrative (i.e., free text). We developed a representation for marking up temporal information in a narrative, consisting of five elements: 1) reference point, 2) direction, 3) number, 4) time unit, and 5) pattern. We identified 254 temporal expressions from 50 discharge summaries and represented them using our scheme. The overall inter-rater reliability among raters applying the representation model was 75 percent agreement. The model can contribute to temporal reasoning in computer systems for decision support, data mining, and process and outcomes analyses by providing structured temporal information.
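The five-element representation described above can be made concrete as a small data structure. The encoding below is a hypothetical sketch (field names and the example phrase are illustrative, not taken from the paper's markup scheme):

```python
from dataclasses import dataclass

@dataclass
class TemporalExpression:
    reference_point: str  # anchor event, e.g. "admission"
    direction: str        # "before" or "after" the reference point
    number: int           # magnitude of the offset
    time_unit: str        # "hour", "day", "week", ...
    pattern: str          # e.g. "once" or "recurring"

# "two weeks prior to admission" under this hypothetical encoding
expr = TemporalExpression("admission", "before", 2, "week", "once")
print(expr)
```

Structuring narrative time expressions this way is what allows downstream temporal reasoning, since the offset and anchor become computable fields rather than free text.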

  18. Thermal modelling using discrete vasculature for thermal therapy: a review

    PubMed Central

    Kok, H.P.; Gellermann, J.; van den Berg, C.A.T.; Stauffer, P.R.; Hand, J.W.; Crezee, J.

    2013-01-01

    Reliable temperature information during clinical hyperthermia and thermal ablation is essential for adequate treatment control, but conventional temperature measurements do not provide 3D temperature information. Treatment planning is a very useful tool to improve treatment quality and substantial progress has been made over the last decade. Thermal modelling is a very important and challenging aspect of hyperthermia treatment planning. Various thermal models have been developed for this purpose, with varying complexity. Since blood perfusion is such an important factor in thermal redistribution of energy in in vivo tissue, thermal simulations are most accurately performed by modelling discrete vasculature. This review describes the progress in thermal modelling with discrete vasculature for the purpose of hyperthermia treatment planning and thermal ablation. There has been significant progress in thermal modelling with discrete vasculature. Recent developments have made real-time simulations possible, which can provide feedback during treatment for improved therapy. Future clinical application of thermal modelling with discrete vasculature in hyperthermia treatment planning is expected to further improve treatment quality. PMID:23738700

  19. Operational atmospheric modeling system CARIS for effective emergency response associated with hazardous chemical releases in Korea.

    PubMed

    Kim, Cheol-Hee; Park, Jin-Ho; Park, Cheol-Jin; Na, Jin-Gyun

    2004-03-01

    The Chemical Accidents Response Information System (CARIS) was developed at the Center for Chemical Safety Management in South Korea in order to track and predict the dispersion of hazardous chemicals in the case of an accident or terrorist attack involving chemical companies. The main objective of CARIS is to facilitate an efficient emergency response to hazardous chemical accidents by rapidly providing key information in the decision-making process. In particular, the atmospheric modeling system implemented in CARIS, which is composed of a real-time numerical weather forecasting model and an air pollution dispersion model, can be used as a tool to forecast concentrations and to provide a wide range of assessments associated with various hazardous chemicals in real time. This article introduces the components of CARIS and describes its operational modeling system. Some examples of the operational modeling system and its use for emergency preparedness are presented and discussed. Finally, this article evaluates the current numerical weather prediction model for Korea.

  20. Modeling Human Exposure to Indoor Contaminants: External Source to Body Tissues.

    PubMed

    Webster, Eva M; Qian, Hua; Mackay, Donald; Christensen, Rebecca D; Tietjen, Britta; Zaleski, Rosemary

    2016-08-16

    Information on human indoor exposure is necessary to assess the potential risk to individuals from many chemicals of interest. Dynamic indoor and human physiologically based pharmacokinetic (PBPK) models of the distribution of nonionizing, organic chemical concentrations in indoor environments, and of the resulting delivered tissue doses, are developed, described and tested. The Indoor model successfully reproduced independently measured, reported time-dependent air concentrations of chloroform released during showering and of 2-butoxyethanol following use of a volatile surface cleaner. The Indoor model predictions were also comparable to those from a higher tier consumer model (ConsExpo 4.1) for the surface cleaner scenario. The PBPK model successfully reproduced observed chloroform exhaled-air concentrations resulting from an inhalation exposure. Fugacity-based modeling provided a seamless description of the partitioning, fluxes, accumulation and release of the chemical in indoor media and tissues of the exposed subject. This has the potential to assist in health risk assessments, provided that appropriate physical/chemical property, usage characteristics, and toxicological information are available.
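    As a rough illustration of the fugacity-style mass-balance approach the abstract describes, the sketch below models a single well-mixed room with a constant emission and first-order ventilation and reaction losses. The room volume, emission rate, and rate constants are invented for the example and are not taken from the paper.

    ```python
    # Hypothetical one-compartment indoor mass balance:
    # dM/dt = E - (k_vent + k_react) * M, solved analytically.
    import math

    def indoor_mass(t_h, emission_g_per_h, vent_per_h, react_per_h=0.0, m0_g=0.0):
        """Chemical mass (g) in room air t_h hours after a constant
        emission starts, with first-order ventilation/reaction losses."""
        k = vent_per_h + react_per_h
        steady = emission_g_per_h / k          # steady-state mass
        return steady + (m0_g - steady) * math.exp(-k * t_h)

    def air_concentration(mass_g, room_m3):
        """Bulk air concentration in g/m^3."""
        return mass_g / room_m3

    # Example: 0.5 g/h release into a 50 m^3 room at 1 air change per hour
    m = indoor_mass(t_h=2.0, emission_g_per_h=0.5, vent_per_h=1.0)
    c = air_concentration(m, room_m3=50.0)
    ```

    A dynamic model of this kind reproduces the rise toward, and decay from, a steady-state concentration; the paper's full models add multiple media and tissue compartments.
    
    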

  1. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and figure-ground separation, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This makes it possible to create intelligent computer vision systems for design and manufacturing.

  2. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    NASA Astrophysics Data System (ADS)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model for FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. Based on this quality model framework, the paper progressively designs the quality elements, evaluation items and properties of the Fundamental Geographic Information Database. Organically connected, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and evaluating the quality of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, for formulating requirements in the testing and evaluation stage, and for constructing a standard system for quality evaluation technology of the Fundamental Geographic Information Database.

  3. Cloud cover estimation: Use of GOES imagery in development of cloud cover data base for insolation assessment

    NASA Technical Reports Server (NTRS)

    Huning, J. R.; Logan, T. L.; Smith, J. H.

    1982-01-01

    The potential of using digital satellite data to establish a cloud cover data base for the United States, one that would provide detailed information on the temporal and spatial variability of cloud development, is studied. Key elements include: (1) interfacing GOES data from the University of Wisconsin Meteorological Data Facility with the Jet Propulsion Laboratory's VICAR image processing system and IBIS geographic information system; (2) creation of a registered multitemporal GOES data base; (3) development of a simple normalization model to compensate for sun angle; (4) creation of a variable size georeference grid that provides detailed cloud information in selected areas and summarized information in other areas; and (5) development of a cloud/shadow model which details the percentage of each grid cell that is cloud and shadow covered, and the percentage of cloud or shadow opacity. In addition, comparison of model calculations of insolation with measured values at selected test sites was accomplished, as well as development of preliminary requirements for a large scale data base of cloud cover statistics.
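    A simple sun-angle normalization of the kind listed in item (3) is commonly a cosine correction by solar zenith angle; the following is a minimal sketch under a Lambertian assumption, with a clamp near the terminator added for numerical safety (our choice, not from the report):

    ```python
    # Hypothetical sun-angle normalization for visible-band brightness:
    # divide the observed value by cos(solar zenith) so low-sun scenes
    # become comparable to high-sun scenes.
    import math

    def normalize_brightness(observed, solar_zenith_deg, min_cos=0.1):
        """Lambertian cosine correction; min_cos guards against
        division blow-up when the sun is near the horizon."""
        mu = max(math.cos(math.radians(solar_zenith_deg)), min_cos)
        return observed / mu

    # A pixel imaged at 60 degrees solar zenith is scaled up by a factor of 2
    print(normalize_brightness(100.0, 60.0))
    ```
    
    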

  4. Foundations for context-aware information retrieval for proactive decision support

    NASA Astrophysics Data System (ADS)

    Mittu, Ranjeev; Lin, Jessica; Li, Qingzhe; Gao, Yifeng; Rangwala, Huzefa; Shargo, Peter; Robinson, Joshua; Rose, Carolyn; Tunison, Paul; Turek, Matt; Thomas, Stephen; Hanselman, Phil

    2016-05-01

    Intelligence analysts and military decision makers are faced with an onslaught of information. From the now ubiquitous presence of intelligence, surveillance, and reconnaissance (ISR) platforms providing large volumes of sensor data, to vast amounts of open source data in the form of news reports, blog postings, or social media postings, the amount of information available to a modern decision maker is staggering. Whether tasked with leading a military campaign or providing support for a humanitarian mission, being able to make sense of all the information available is a challenge. Due to the volume and velocity of this data, automated tools are required to help support reasoned, human decisions. In this paper we describe several automated techniques that are targeted at supporting decision making. Our approaches include modeling the kinematics of moving targets as motifs; developing normalcy models and detecting anomalies in kinematic data; automatically classifying the roles of users in social media; and modeling geo-spatial regions based on the behavior that takes place in them. These techniques cover a wide range of potential decision maker needs.

  5. Forecasting runout of rock and debris avalanches

    USGS Publications Warehouse

    Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.

    2006-01-01

    Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.
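    Statistical runout methods of the kind compared here are often volume-area power laws. The sketch below assumes scaling of the form A = c1·V^(2/3) and B = c2·V^(2/3) for cross-sectional and planimetric inundation areas; the coefficient values are illustrative placeholders (calibrated values differ by flow type and are not given in this abstract):

    ```python
    # Volume-area power-law sketch for statistical runout forecasting.
    # Given a postulated avalanche volume V (m^3), predict the
    # cross-sectional area A and planimetric area B it may inundate.
    def inundation_areas(volume_m3, c_cross=0.05, c_plan=200.0):
        """Power-law inundation areas; c_cross and c_plan are
        placeholder calibration coefficients (assumed, not sourced)."""
        scale = volume_m3 ** (2.0 / 3.0)
        return c_cross * scale, c_plan * scale

    # Example: a postulated 1,000,000 m^3 avalanche
    a_cross, b_plan = inundation_areas(1.0e6)
    ```

    In GIS practice, the predicted areas are then draped over topography downslope of the identified source area to map the runout zone, which is why the method needs only source locations and a range of volumes as input.
    
    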

  6. Modeling Off-Nominal Behavior in SysML

    NASA Technical Reports Server (NTRS)

    Day, John; Donahue, Kenny; Ingham, Mitch; Kadesch, Alex; Kennedy, Kit; Post, Ethan

    2012-01-01

    Fault Management is an essential part of the system engineering process that is limited in its effectiveness by the ad hoc nature of the applied approaches and methods. Providing a rigorous way to develop and describe off-nominal behavior is a necessary step in the improvement of fault management, and as a result, will enable safe, reliable and available systems even as system complexity increases. The basic concepts described in this paper provide a foundation to build a larger set of necessary concepts and relationships for precise modeling of off-nominal behavior, and a basis for incorporating these ideas into the overall systems engineering process. The simple FMEA example provided applies the modeling patterns we have developed and illustrates how the information in the model can be used to reason about the system and derive typical fault management artifacts. A key insight from the FMEA work was the utility of defining failure modes as the "inverse of intent", and deriving this from the behavior models. Additional work is planned to extend these ideas and capabilities to other types of relevant information and additional products.

  7. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
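    The Watanabe-Akaike information criterion mentioned in the abstract can be computed from a matrix of pointwise log-likelihoods over posterior draws. The following is a minimal sketch of one common formulation (log pointwise predictive density minus a variance-based effective-parameter penalty, reported on the deviance scale); it is illustrative, not the paper's own code:

    ```python
    # WAIC from an S x n matrix: loglik[s][i] = log p(y_i | theta_s)
    # for posterior draw s and observation i.
    import math

    def waic(loglik):
        S, n = len(loglik), len(loglik[0])
        lppd, p_waic = 0.0, 0.0
        for i in range(n):
            col = [row[i] for row in loglik]
            # log pointwise predictive density via log-mean-exp (stable)
            m = max(col)
            lppd += m + math.log(sum(math.exp(v - m) for v in col) / S)
            # effective parameters: posterior variance of the log-likelihood
            mean = sum(col) / S
            p_waic += sum((v - mean) ** 2 for v in col) / (S - 1)
        return -2.0 * (lppd - p_waic)   # deviance scale; lower is better
    ```

    In practice the log-likelihood matrix comes from MCMC output; the same matrix also supports approximate leave-one-out cross-validation.
    
    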

  8. Uncertainty

    USGS Publications Warehouse

    Hunt, Randall J.

    2012-01-01

    Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.
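    A heuristic Monte Carlo uncertainty analysis of the kind the chapter contrasts with more formal statistics-based methods can be sketched as follows. The toy drawdown model and the lognormal spread assumed for hydraulic conductivity are illustrative assumptions, not content from the chapter:

    ```python
    # Monte Carlo sketch: sample an uncertain input, run a (toy) model
    # for each draw, and report an empirical 95% prediction interval.
    import random

    random.seed(1)  # reproducible draws

    def toy_drawdown(conductivity, pumping=100.0):
        """Toy groundwater prediction: drawdown scales inversely with K."""
        return pumping / conductivity

    draws = [random.lognormvariate(0.0, 0.5) for _ in range(5000)]
    samples = sorted(toy_drawdown(k) for k in draws)
    lo = samples[int(0.025 * len(samples))]
    hi = samples[int(0.975 * len(samples))]
    print(f"95% interval for predicted drawdown: [{lo:.1f}, {hi:.1f}]")
    ```

    The resulting interval, rather than a single predicted value, is what lets the prediction feed a risk-management framework: the probability of exceeding a threshold can be read directly from the sampled distribution.
    
    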

  9. Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method

    NASA Astrophysics Data System (ADS)

    Nugraha, A. L.; Awaluddin, M.; Sasmito, B.

    2018-02-01

    One important aspect of disaster mitigation planning is hazard mapping. Hazard mapping can provide spatial information on the distribution of locations that are threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with a high intensity of natural disasters. Natural disasters that frequently strike Semarang City are tidal floods, floods, landslides, and droughts. Therefore, Semarang City needs spatial information in the form of multi-hazard mapping to support its disaster mitigation planning. The multi-hazard map model can be derived from parameters such as slope, rainfall, land use, and soil type maps. The modelling is done using a GIS method with scoring and overlay techniques. However, the accuracy of the modelling is better if the GIS method is combined with fuzzy logic techniques, which provide a good classification in determining disaster threats. The GIS-Fuzzy method can build a multi-hazard map of Semarang City that delivers results with good accuracy and an appropriate spread of threat classes, providing disaster information for the city's mitigation planning. The multi-hazard modelling using GIS-Fuzzy showed that the Gaussian membership type gave good accuracy, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the membership types tested.
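    A Gaussian fuzzy membership function of the type the study found most accurate can be sketched as below, followed by a weighted overlay of the per-parameter memberships. The centres, spreads, and weights are invented for illustration and are not the study's calibrated values:

    ```python
    # Gaussian fuzzy membership for grading a hazard parameter into [0, 1],
    # then a weighted overlay combining several parameters into one score.
    import math

    def gauss_membership(x, centre, sigma):
        """Degree of membership in a fuzzy hazard class."""
        return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

    def weighted_overlay(memberships, weights):
        """Combine per-parameter memberships into one hazard score."""
        return sum(m * w for m, w in zip(memberships, weights)) / sum(weights)

    # Illustrative inputs: slope (degrees) and annual rainfall (mm)
    slope_m = gauss_membership(30.0, centre=45.0, sigma=15.0)
    rain_m = gauss_membership(2500.0, centre=3000.0, sigma=800.0)
    score = weighted_overlay([slope_m, rain_m], weights=[2.0, 1.0])
    ```

    The smooth Gaussian grading is what distinguishes the fuzzy approach from crisp scoring classes: cells near a class boundary receive intermediate threat values instead of jumping between classes.
    
    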

  10. Cost model for biobanks.

    PubMed

    Gonzalez-Sanchez, M Beatriz; Lopez-Valeiras, Ernesto; Morente, Manuel M; Fernández Lago, Orlando

    2013-10-01

    Current economic conditions and budget constraints in publicly funded biomedical research have brought about a renewed interest in analyzing the cost and economic viability of research infrastructures. However, there are no proposals for specific cost accounting models for these types of organizations in the international scientific literature. The aim of this paper is to present the basis of a cost analysis model useful for any biobank regardless of the human biological samples that it stores for biomedical research. The development of a unique cost model for biobanks can be a complicated task due to the diversity of the biological samples they store. Different types of samples (DNA, tumor tissues, blood, serum, etc.) require different production processes. Nonetheless, the common basic steps of the production process can be identified. Thus, the costs incurred in each step can be analyzed in detail to provide cost information. Six stages and four cost objects were obtained by taking the production processes of biobanks belonging to the Spanish National Biobank Network as a starting point. Templates and examples are provided to help managers to identify and classify the costs involved in their own biobanks to implement the model. The application of this methodology will provide accurate information on cost objects, along with useful information to give an economic value to the stored samples, to analyze the efficiency of the production process and to evaluate the viability of some sample collections.
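    The stage-based cost accumulation the model proposes might look like the following sketch; the stage names and per-sample figures are hypothetical placeholders, not the paper's templates:

    ```python
    # Illustrative per-sample cost object built from common biobank
    # production stages. All figures are hypothetical (euros per sample).
    STAGE_COSTS = {
        "reception":        4.0,
        "processing":       12.5,
        "quality_control":  6.0,
        "storage_per_year": 1.5,   # recurring; all others are one-off
        "retrieval":        3.0,
        "distribution":     5.0,
    }

    def sample_cost(years_stored):
        """Full cost of one stored-and-distributed sample."""
        one_off = sum(v for k, v in STAGE_COSTS.items()
                      if k != "storage_per_year")
        return one_off + STAGE_COSTS["storage_per_year"] * years_stored

    print(sample_cost(years_stored=4))
    ```

    Separating the recurring storage cost from the one-off stage costs is what lets a biobank put an economic value on a stored sample as a function of its shelf time, which supports the viability analysis the abstract describes.
    
    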

  11. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and a parallel structure, which would be advantageous in a machine vision system. A software model has been developed that is ultimately intended to be a tool to guide the design of an analog, real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and each contain seven photoreceptors with Gaussian profiles. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
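    The cartridge sampling described above (seven Gaussian-profile photoreceptors per cartridge, with variable spacing) can be sketched as follows; the neighbour layout and the Gaussian width are our assumptions for illustration, not the model's actual parameters:

    ```python
    # One "cartridge" of seven Gaussian-profile photoreceptors sampling
    # a grayscale image given as a 2-D list of floats.
    import math

    def gaussian_response(img, cx, cy, sigma=1.0, radius=3):
        """Gaussian-weighted average of pixels around (cx, cy)."""
        num = den = 0.0
        for y in range(max(0, cy - radius), min(len(img), cy + radius + 1)):
            for x in range(max(0, cx - radius), min(len(img[0]), cx + radius + 1)):
                w = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
                num += w * img[y][x]
                den += w
        return num / den

    def cartridge_responses(img, cx, cy, spacing=2):
        """Centre photoreceptor plus six neighbours; spacing controls
        how much detail the cartridge resolves."""
        offsets = [(0, 0), (spacing, 0), (-spacing, 0), (0, spacing),
                   (0, -spacing), (spacing, spacing), (-spacing, -spacing)]
        return [gaussian_response(img, cx + dx, cy + dy) for dx, dy in offsets]
    ```

    Overlapping Gaussian profiles are what make hyperacuity possible: a feature's position can be estimated to sub-pixel precision from the ratio of neighbouring photoreceptor responses.
    
    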

  12. Habitat Suitability Index Models: Veery

    USGS Publications Warehouse

    Sousa, Patrick J.

    1982-01-01

    Habitat preferences and species characteristics of the veery (Catharus fuscescens) are described in this publication. It is one of a series of Habitat Suitability Index (HSI) models and was developed through an analysis of available scientific data on the habitat requirements of the veery. Habitat use information is presented in a review of the literature, followed by the development of an HSI model. The model is presented in three formats: graphic, word, and mathematical. Suitability index graphs quantify the species-habitat relationship. These data are synthesized into a model designed to provide information for use in impact assessment and habitat management.
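    HSI models in this series combine component suitability indices (each on a 0-1 scale, read from the suitability index graphs) into an overall index. The sketch below assumes a geometric-mean combination, which is one common HSI aggregation rule; the veery model's actual equations and habitat variables are not given in this abstract:

    ```python
    # Overall HSI as the geometric mean of component suitability indices.
    def hsi(suitability_indices):
        """Any component SI of zero makes the habitat unsuitable overall,
        which is the intended behavior of a geometric-mean aggregation."""
        prod = 1.0
        for si in suitability_indices:
            if not 0.0 <= si <= 1.0:
                raise ValueError("each SI must lie in [0, 1]")
            prod *= si
        return prod ** (1.0 / len(suitability_indices))

    # e.g. hypothetical canopy-cover, shrub-density, and moisture SIs
    print(hsi([0.8, 0.5, 0.9]))
    ```
    
    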

  13. Optimizing Energy Consumption in Building Designs Using Building Information Model (BIM)

    NASA Astrophysics Data System (ADS)

    Egwunatum, Samuel; Joseph-Akwara, Esther; Akaigwe, Richard

    2016-09-01

    Given the ability of a Building Information Model (BIM) to serve as a multi-disciplinary data repository, this paper seeks to explore and exploit the sustainability value of Building Information Modelling/models in delivering buildings that require less energy for their operation, emit less CO2 and at the same time provide a comfortable living environment for their occupants. This objective was achieved by a critical and extensive review of the literature covering: (1) building energy consumption, (2) building energy performance and analysis, and (3) building information modeling and energy assessment. The literature cited in this paper showed that linking an energy analysis tool with a BIM model helped project design teams predict and optimize energy consumption. To validate this finding, an in-depth analysis was carried out on a completed BIM-integrated construction project, the Arboleda Project in the Dominican Republic. The findings showed that the BIM-based energy analysis helped the design team achieve the world's first 103% positive energy building. From the research findings, the paper concludes that linking an energy analysis tool with a BIM model helps to expedite the energy analysis process, provide more detailed and accurate results, and deliver energy-efficient buildings. The study further recommends that the adoption of level 2 BIM and the integration of BIM in energy optimization analyses should be made compulsory for all projects irrespective of the method of procurement (government-funded or otherwise) or their size.

  14. DTIC (Defense Technical Information Center) Model Action Plan for Incorporating DGIS (DOD Gateway Information System) Capabilities.

    DTIC Science & Technology

    1986-05-01

    Information System (DGIS) is being developed to provide the DOD community with a modern tool to access diverse databases and extract information products...this community with a modern tool for accessing these databases and extracting information products from them. Since the Defense Technical Information...adjunct to DROLS results. The study, therefore, centered around obtaining background information inside the unit on that unit's users who request DROLS

  15. Innovative approach to the design and evaluation of treatment adherence interventions for drug-resistant TB.

    PubMed

    Alegria-Flores, K; Weiner, B J; Wiesen, C A; Lich, K L H; Van Rie, A; Paul, J E; Tovar, M A

    2017-11-01

    Drug-resistant tuberculosis (DR-TB) treatment is expensive, lengthy, and can cause severe side effects. Patients face socio-economic, psychosocial, and systemic barriers to adherence; poor adherence results in poor treatment outcomes. To estimate the effects of the components of the information-motivation-behavioral skills model on DR-TB treatment adherence. We recruited 326 adults receiving DR-TB treatment and 86 of their health care service providers from 40 health centers in Lima, Peru. The main outcome was adherence (i.e., the proportion of prescribed doses taken by a patient). Exposure measures were adherence information, motivation, and behavioral skills; loss to follow-up during previous TB treatment(s); providers' work engagement; and patient-perceived support from his/her social network. Structural equation modeling revealed that adherence information and motivation had positive effects on adherence, but only if mediated through behavioral skills (β = 0.02, P < 0.01 and β = 0.07, P < 0.001, respectively). Behavioral skills had a direct positive effect on adherence (β = 0.27, P < 0.001). Loss to follow-up during previous treatment had a direct negative effect, providers' work engagement had a direct positive effect, and perceived support had indirect positive effects on adherence. The model's overall R2 was 0.76. The components of the information-motivation-behavioral skills model were associated with adherence and could be used to design, monitor, and evaluate interventions targeting adherence to DR-TB treatment.

  16. Information Security and Data Breach Notification Safeguards

    DTIC Science & Technology

    2007-07-31

    for unauthorized purposes. Data breach notification requirements obligate covered entities to provide notice to affected persons (e.g., cardholders...customers) about the occurrence of a data security breach involving personally identifiable information. The first data breach notification law was...computerized personal information to disclose any breach of a resident’s personal information. S.B. 1386 was the model for subsequent data breach notification

  17. Nurses' Satisfaction With Using Nursing Information Systems From Technology Acceptance Model and Information Systems Success Model Perspectives: A Reductionist Approach.

    PubMed

    Lin, Hsien-Cheng

    2017-02-01

    Nursing information systems can enhance nursing practice and the efficiency and quality of administrative affairs within the nursing department and thus have been widely considered for implementation. Close alignment of human-computer interaction can advance optimal clinical performance with the use of information systems. However, a lack of introduction of the concept of alignment between users' perceptions and technological functionality has caused dissatisfaction, as shown in the existing literature. This study provides insight into the alignment between nurses' perceptions and how technological functionality affects their satisfaction with Nursing Information System use through a reductionist perspective of alignment. This cross-sectional study collected data from 531 registered nurses in Taiwan. The results indicated that "perceived usefulness in system quality alignment," "perceived usefulness in information quality alignment," "perceived ease of use in system quality alignment," "perceived ease of use in information quality alignment," and "perceived ease of use in service quality alignment" have significantly affected nurses' satisfaction with Nursing Information System use. However, "perceived usefulness in service quality alignment" had no significant effect on nurses' satisfaction. This study also provides some meaningful implications for theoretical and practical aspects of design.

  18. Faculty Salary Equity: Issues in Regression Model Selection. AIR 1992 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Moore, Nelle

    This paper discusses the determination of college faculty salary inequity and identifies the areas in which human judgment must be used in order to conduct a statistical analysis of salary equity. In addition, it provides some informed guidelines for making those judgments. The paper provides a framework for selecting salary equity models, based…

  19. Delineating generalized species boundaries from species distribution data and a species distribution model

    Treesearch

    Matthew P. Peters; Stephen N. Matthews; Louis R. Iverson; Anantha M. Prasad

    2013-01-01

    Species distribution models (SDM) are commonly used to provide information about species ranges or extents, and often are intended to represent the entire area of potential occupancy or suitable habitat in which individuals occur. While SDMs can provide results over various geographic extents, they normally operate within a grid and cannot delimit distinct, smooth...

  20. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon, and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
