Science.gov

Sample records for existing models quantitatively

  1. Comment on “Can existing models quantitatively describe the mixing behavior of acetone with water” [J. Chem. Phys. 130, 124516 (2009)]

    PubMed Central

    Kang, Myungshim; Perera, Aurelien; Smith, Paul E.

    2009-01-01

    A recent publication reported that simulations of acetone-water mixtures using the KBFF model for acetone indicate demixing at acetone mole fractions below 0.28, in disagreement with experiment and two previously published studies. Here, we point out some inconsistencies in that study which could help to explain these differences. PMID:20568888

  2. A Primer on Quantitative Modeling.

    PubMed

    Neagu, Iulia; Levine, Erel

    2015-01-01

    Caenorhabditis elegans is particularly suitable for obtaining quantitative data about behavior, neuronal activity, gene expression, ecological interactions, quantitative traits, and much more. To exploit the full potential of these data one seeks to interpret them within quantitative models. Using two examples from the C. elegans literature we briefly explore several types of modeling approaches relevant to worm biology, and show how they might be used to interpret data, formulate testable hypotheses, and suggest new experiments. We emphasize that the choice of modeling approach is strongly dependent on the questions of interest and the type of available knowledge.

  3. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model against power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
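
    A minimal sketch of the kind of parsimony-penalized comparison described above, using the Bayesian Information Criterion as a crude stand-in for a full Bayesian evidence calculation; the synthetic relaxation data, the multi-mode Maxwell form, and all parameter values are invented for illustration and are not the authors' PVA-Borax analysis.

      # Sketch: penalized comparison of 1-, 2- and 3-mode Maxwell fits (illustrative data).
      import numpy as np
      from scipy.optimize import curve_fit

      def maxwell(t, *p):
          # p = (G1, tau1, G2, tau2, ...): sum of exponential relaxation modes
          g = np.zeros_like(t)
          for G, tau in zip(p[0::2], p[1::2]):
              g += G * np.exp(-t / tau)
          return g

      rng = np.random.default_rng(0)
      t = np.linspace(0.01, 10.0, 80)
      data = maxwell(t, 1.0, 0.3, 0.4, 3.0) + 0.02 * rng.standard_normal(t.size)  # synthetic

      def bic(n_modes):
          popt, _ = curve_fit(maxwell, t, data, p0=[1.0, 1.0] * n_modes, maxfev=20000)
          rss = np.sum((data - maxwell(t, *popt)) ** 2)
          k, n = 2 * n_modes, t.size
          return n * np.log(rss / n) + k * np.log(n)  # lower is better

      print({m: round(bic(m), 1) for m in (1, 2, 3)})

    Under such a criterion the extra modes must buy enough goodness of fit to pay their complexity penalty, which is the qualitative behavior the abstract describes; a full treatment would integrate over the stated prior parameter ranges rather than use BIC.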

  4. Modeling Truth Existence in Truth Discovery.

    PubMed

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-08-01

    When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery aims to infer the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, named no-truth questions, are difficult to distinguish from the questions that have true answers, named has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate, and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inferring this graphical model requires parameter tuning of the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods.
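
    A toy illustration of the three source-quality measures named above (silent rate, false spoken rate, true spoken rate), computed here against a known answer key for clarity; the claims table and the simple per-source definitions are assumptions made for the sketch, whereas the paper infers these quantities jointly with the unknown truth in a probabilistic graphical model.

      # Toy source-quality measures for truth discovery (answer key assumed known here).
      truth = {"q1": "a", "q2": "b", "q3": None}           # None marks a no-truth question
      claims = {
          "src1": {"q1": "a", "q2": "x"},                  # silent on q3
          "src2": {"q1": "a", "q2": "b", "q3": "z"},       # speaks falsely on q3
      }

      def quality(src_claims, truth):
          silent_rate = 1 - len(src_claims) / len(truth)
          spoken = list(src_claims)
          true_spoken = sum(1 for q in spoken if truth[q] is not None and src_claims[q] == truth[q])
          n = max(len(spoken), 1)
          return {"silent": round(silent_rate, 2),
                  "true_spoken": round(true_spoken / n, 2),
                  "false_spoken": round((len(spoken) - true_spoken) / n, 2)}

      for name, c in claims.items():
          print(name, quality(c, truth))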

  5. Modeling Truth Existence in Truth Discovery

    PubMed Central

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-01-01

    When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery aims to infer the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, named no-truth questions, are difficult to distinguish from the questions that have true answers, named has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate, and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inferring this graphical model requires parameter tuning of the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods. PMID:26705507

  6. LDEF data: Comparisons with existing models

    NASA Technical Reports Server (NTRS)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-01-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) and the existing models for both the natural micrometeoroid environment and man-made debris was investigated. Experimental data were provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A personal computer (PC) program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as functions of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. In addition, many hydrodynamic impact simulations of various impact events were conducted using CTH; these identified certain modes of response, including simple metallic target cratering, perforations, and delamination effects of coatings.
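
    An illustrative version of the flux-to-crater conversion that the scaling laws above are used for: a cumulative particle flux F(>d) is turned into an expected crater count through an assumed constant crater-to-particle diameter ratio. The power-law flux, the ratio of 5, and the exposure values are placeholders, not the SPENV model or LDEF results.

      # Sketch: relate a cumulative impactor flux F(>d) to expected crater counts
      # via an assumed crater-to-particle diameter ratio (all numbers invented).
      def cumulative_flux(d_particle_cm):
          # hypothetical power law: impacts per m^2 per year by particles larger than d
          return 1.0e-4 * d_particle_cm ** -2.5

      def expected_craters(d_crater_cm, area_m2, years, crater_to_particle=5.0):
          d_particle = d_crater_cm / crater_to_particle    # invert the scaling law
          return cumulative_flux(d_particle) * area_m2 * years

      for dc in (0.01, 0.05, 0.1):
          print(f"craters larger than {dc} cm per m^2 over 5.7 yr: {expected_craters(dc, 1.0, 5.7):.2f}")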

  7. Quantitative reactive modeling and verification.

    PubMed

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  8. Quantitative vortex models of turbulence

    NASA Astrophysics Data System (ADS)

    Pullin, D. I.

    2001-11-01

    This presentation will review attempts to develop models of turbulence, based on compact vortex elements, that can be used both to obtain quantitative estimates of various statistical properties of turbulent fine scales and also to formulate subgrid-transport models for large-eddy simulation (LES). Attention will be focused on a class of stretched-vortex models. Following a brief review of prior work, recent studies of vortex-based modeling of the small-scale behavior of a passive scalar will be discussed. The large-wavenumber spectrum of a passive scalar undergoing mixing by the velocity field of a stretched-spiral vortex will be shown to consist of the sum of two classical power laws, a k^(-1) Batchelor spectrum for wavenumbers up to the inverse Batchelor scale, and a k^(-5/3) Obukhov-Corrsin spectrum for wavenumbers less than the inverse Kolmogorov scale (joint work with T.S. Lundgren). We will then focus on the use of stretched vortices as the basic subgrid structure in subgrid-scale (SGS) modeling for LES of turbulent flows. An SGS stress model and a vortex-based scalar-flux model for the LES of flows with turbulent mixing will be outlined. Application of these models to the LES of decaying turbulence, channel flow, the mixing of a passive scalar by homogeneous turbulence in the presence of a mean scalar gradient, and to the LES of compressible turbulence will be described.
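
    For reference, the two classical scalar-spectrum forms named above can be written in their standard textbook notation (the prefactor constants and the precise matching near the Kolmogorov scale are not taken from this abstract):

      E_\theta(k) \;\simeq\; C_{OC}\, \bar{\chi}\, \epsilon^{-1/3}\, k^{-5/3}, \qquad k \lesssim 1/\eta \quad \text{(Obukhov-Corrsin)}
      E_\theta(k) \;\simeq\; C_{B}\, \bar{\chi}\, (\nu/\epsilon)^{1/2}\, k^{-1}, \qquad 1/\eta \lesssim k \lesssim 1/\eta_B \quad \text{(Batchelor)}

    where \bar{\chi} is the scalar dissipation rate, \epsilon the kinetic energy dissipation rate, \nu the kinematic viscosity, \eta the Kolmogorov scale, and \eta_B the Batchelor scale.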

  9. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, Do-Hyuk; Tang, Shurun; Kim, Edward J.

    2015-10-01

    A radiative transfer model (RTM) to calculate snow brightness temperatures (Tb) is a critical element in terrestrial snow parameter retrieval from microwave remote sensing observations. The RTM simulates the Tb for a layered snowpack by solving a set of microwave radiative transfer equations. Even with the same snow physical inputs to drive the RTM, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-QMS), and Helsinki University of Technology (HUT) models produce different Tb responses. To invert snow physical properties from the Tb, the differences among the RTMs must first be quantitatively explained. To this end, this initial investigation evaluates the sources of perturbations in these RTMs and identifies the equations where the variations arise among the three models. Modelling experiments are conducted by providing the same but gradually varied snow physical inputs, such as snow grain size and snow density, to the three RTMs. Simulations are conducted at frequencies consistent with the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the three RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against in situ snow samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010.

  10. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, D. H.; Tan, S.; Kim, E. J.

    2015-12-01

    A radiative transfer model (RTM) to calculate a snow brightness temperature (Tb) is a critical element in retrieving terrestrial snow from microwave remote sensing observations. The RTM simulates the Tb for a layered snowpack by solving a set of microwave radiative transfer equations. Even with the same snow physical inputs used for the RTM, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-Tsang), and Helsinki University of Technology (HUT) models produce different Tb responses. To invert snow physical properties from the Tb, the differences among the RTMs must be quantitatively explained. To this end, the paper evaluates the sources of perturbations in the RTMs and identifies the equations where the variations arise among the three models. Investigations are conducted by providing the same but gradually varied snow physical inputs, such as snow grain size and snow density, to the three RTMs. Simulations are done at frequencies consistent with the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the three RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against snow core samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010.

  11. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  13. Progress on Quantitative Modeling of rf Sheaths

    NASA Astrophysics Data System (ADS)

    D'Ippolito, D. A.; Myra, J. R.; Kohno, H.; Wright, J. C.

    2011-12-01

    A new quantitative approach for computing the rf sheath potential is described, which incorporates plasma dielectric effects and the relative geometry of the magnetic field and the material boundaries. The new approach uses a modified boundary condition ("rf sheath BC") that couples the rf waves and the sheaths at the boundary. It treats the sheath as a thin vacuum region and matches the fields across the plasma-vacuum boundary. When combined with the Child-Langmuir Law (relating the sheath width and sheath potential), the model permits a self-consistent determination of the sheath parameters and the rf electric field at the sheath-plasma boundary. Semi-analytic models using this BC predict a number of general features, including a sheath voltage threshold, a dimensionless parameter characterizing rf sheath effects, and the existence of sheath plasma waves with an associated resonance. Since the sheath BC is nonlinear and dependent on geometry, computing the sheath potential numerically is a challenging computational problem. Numerical results will be presented from a new parallel-processing finite-element rf wave code for the tokamak scrape-off layer (called "rfSOL"). The code has verified the physics predicted by analytic theory in 1D, and extended the solutions into model 2D geometries. The numerical calculations confirm the existence of multiple roots and hysteresis effects, and parameter studies have been carried out. Areas for future work will be discussed.
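
    A deliberately simplified fixed-point iteration of the self-consistency loop sketched above: a Child-Langmuir-type width-voltage scaling (the standard 3/4-power law) is closed with an assumed linear relation between the rf field at the sheath and the sheath voltage. The prefactors, the linear field relation, and all numbers are placeholders; this is not the rfSOL boundary condition.

      # Toy self-consistent sheath: Delta = lambda_D * (V/Te)^(3/4), closed with an
      # assumed V = E_rf * Delta relation. All values are placeholders.
      lambda_D = 5.0e-5     # Debye length [m], placeholder
      Te = 20.0             # electron temperature [eV], placeholder
      E_rf = 1.0e6          # rf field magnitude at the sheath [V/m], placeholder

      V = Te                # initial guess for the sheath voltage [V]
      for _ in range(200):
          delta = lambda_D * (V / Te) ** 0.75   # sheath width from the voltage
          V_new = E_rf * delta                  # voltage from the rf field across the sheath
          if abs(V_new - V) < 1e-9 * max(abs(V), 1.0):
              break
          V = V_new

      print(f"sheath width ~ {delta:.2e} m, sheath voltage ~ {V:.0f} V")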

  14. Integrated Environmental Modeling: Quantitative Microbial Risk Assessment

    EPA Science Inventory

    The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...

  16. Quantitative structure - mesothelioma potency model ...

    EPA Pesticide Factsheets

    Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike’s Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar
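
    A small sketch of the kind of dose-metric comparison described in finding (3): logistic regressions of a binary tumor response against two alternative dose metrics, compared by AIC. The columns and outcomes below are random placeholders, not the rat intra-pleural data set.

      # Sketch: compare alternative dose metrics for a binary tumor response by AIC.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 50
      log_total_sa = rng.normal(0.0, 1.0, n)                   # log summed EP surface area (placeholder)
      log_total_n = log_total_sa + rng.normal(0.0, 0.8, n)     # log summed EP number (placeholder)
      p = 1 / (1 + np.exp(-(0.5 + 1.5 * log_total_sa)))
      tumor = rng.binomial(1, p)

      def fit_aic(dose_metric):
          X = sm.add_constant(dose_metric)
          return sm.Logit(tumor, X).fit(disp=False).aic        # lower AIC = better-supported metric

      print("AIC, surface-area metric:   ", round(fit_aic(log_total_sa), 1))
      print("AIC, particle-number metric:", round(fit_aic(log_total_n), 1))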

  17. Global existence for a degenerate haptotaxis model of cancer invasion

    NASA Astrophysics Data System (ADS)

    Zhigun, Anna; Surulescu, Christina; Uatay, Aydar

    2016-12-01

    We propose and study a strongly coupled PDE-ODE system with tissue-dependent degenerate diffusion and haptotaxis that can serve as a model prototype for cancer cell invasion through the extracellular matrix. We prove the global existence of weak solutions and illustrate the model behavior by numerical simulations for a two-dimensional setting.

  18. 6 Principles for Quantitative Reasoning and Modeling

    ERIC Educational Resources Information Center

    Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye

    2014-01-01

    Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…

  20. Is It Possible to Prove the Existence of an Aging Program by Quantitative Analysis of Mortality Dynamics?

    PubMed

    Shilovsky, G A; Putyatina, T S; Lysenkov, S N; Ashapkin, V V; Luchkina, O S; Markov, A V; Skulachev, V P

    2016-12-01

    Accumulation of various types of lesions in the course of aging increases an organism's vulnerability and results in a monotonous elevation of mortality rate, irrespective of the position of a species on the evolutionary tree. Stroustrup et al. (Nature, 530, 103-107) [1] showed in 2016 that in the nematode Caenorhabditis elegans, longevity-altering factors (e.g. oxidative stress, temperature, or diet) do not change the shape of the survival curve, but either stretch or shrink it along the time axis, which the authors attributed to the existence of an "aging program". Modification of the accelerated failure time model by Stroustrup et al. uses temporal scaling as a basic approach for distinguishing between quantitative and qualitative changes in aging dynamics. Thus we analyzed data on the effects of various longevity-increasing genetic manipulations in flies, worms, and mice and used several models to choose a theory that would best fit the experimental results. The possibility to identify the moment of switch from a mortality-governing pathway to some other pathways might be useful for testing geroprotective drugs. In this work, we discuss this and other aspects of temporal scaling.
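
    A minimal numerical illustration of the temporal-scaling idea discussed above: two survival curves generated with the same shape but different time scales collapse onto one another once time is rescaled by the ratio of median lifespans. The Weibull lifetimes and parameters are invented for the sketch and are not data from any of the cited studies.

      # Sketch: temporal scaling of survival curves (illustrative Weibull lifetimes).
      import numpy as np

      rng = np.random.default_rng(2)
      control = rng.weibull(3.0, 5000) * 20.0      # lifespans, arbitrary units
      treated = rng.weibull(3.0, 5000) * 30.0      # same shape, stretched time axis

      def survival(lifespans, t):
          return np.array([(lifespans > ti).mean() for ti in t])

      t = np.linspace(0.0, 60.0, 200)
      scale = np.median(treated) / np.median(control)
      s_control = survival(control, t)
      s_treated_rescaled = survival(treated, t * scale)   # shrink the treated curve back

      print("scale factor:", round(scale, 2))
      print("max |difference| after rescaling:", round(np.max(np.abs(s_control - s_treated_rescaled)), 3))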

  1. Facilities Management of Existing School Buildings: Two Models.

    ERIC Educational Resources Information Center

    Building Technology, Inc., Silver Spring, MD.

    While all school districts are responsible for the management of their existing buildings, they often approach the task in different ways. This document presents two models that offer ways a school district administration, regardless of size, may introduce activities into its ongoing management process that will lead to improvements in earthquake…

  2. Training of Existing Workers: Issues, Incentives and Models. Support Document

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This document was produced by the authors based on their research for the report, "Training of Existing Workers: Issues, Incentives and Models," (ED495138) and is an added resource for further information. This support document is divided into the following sections: (1) The Retail Industry--A Snapshot; (2) Case Studies--Hardware, Retail…

  3. Small data global existence for a fluid-structure model

    NASA Astrophysics Data System (ADS)

    Ignatova, Mihaela; Kukavica, Igor; Lasiecka, Irena; Tuffaha, Amjad

    2017-02-01

    We address the system of partial differential equations modeling motion of an elastic body inside an incompressible fluid. The fluid is modeled by the incompressible Navier-Stokes equations while the structure is represented by the damped wave equation with interior damping. The additional boundary stabilization γ, considered in our previous paper, is no longer necessary. We prove the global existence and exponential decay of solutions for small initial data in a suitable Sobolev space.

  4. Abundant Quantitative Trait Loci Exist for DNA Methylation and Gene Expression in Human Brain

    PubMed Central

    Traynor, Bryan J.; Nalls, Michael A.; Lai, Shiao-Lin; Arepalli, Sampath; Dillman, Allissa; Rafferty, Ian P.; Troncoso, Juan; Johnson, Robert; Zielke, H. Ronald; Ferrucci, Luigi; Longo, Dan L.; Cookson, Mark R.; Singleton, Andrew B.

    2010-01-01

    A fundamental challenge in the post-genome era is to understand and annotate the consequences of genetic variation, particularly within the context of human tissues. We present a set of integrated experiments that investigate the effects of common genetic variability on DNA methylation and mRNA expression in four human brain regions each from 150 individuals (600 samples total). We find an abundance of genetic cis regulation of mRNA expression and show for the first time abundant quantitative trait loci for DNA CpG methylation across the genome. We show peak enrichment for cis expression QTLs to be approximately 68,000 bp away from individual transcription start sites; however, the peak enrichment for cis CpG methylation QTLs is located much closer, only 45 bp from the CpG site in question. We observe that the largest magnitude quantitative trait loci occur across distinct brain tissues. Our analyses reveal that CpG methylation quantitative trait loci are more likely to occur for CpG sites outside of islands. Lastly, we show that while we can observe individual QTLs that appear to affect both the level of a transcript and a physically close CpG methylation site, these are quite rare. We believe these data, which we have made publicly available, will provide a critical step toward understanding the biological effects of genetic variation. PMID:20485568

  5. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  7. A Quantitative Model for Assessing Faculty Promotion.

    ERIC Educational Resources Information Center

    Tekian, Ara; And Others

    This paper describes a quantitative model that can be used to evaluate faculty performance for promotion decisions. Through the use of an evaluation form, the system (1) informs faculty members how they will be evaluated at the end of each academic year; (2) allows faculty growth to be documented in teaching, research, and other activities which…

  8. The existence of amorphous phase in Portland cements: Physical factors affecting Rietveld quantitative phase analysis

    SciTech Connect

    Snellings, Ruben; Bazzoni, Amélie; Scrivener, Karen

    2014-05-01

    Rietveld quantitative phase analysis has become a widespread tool for the characterization of Portland cement, both for research and production control purposes. One of the major remaining points of debate is whether Portland cements contain amorphous content or not. This paper presents detailed analyses of the amorphous phase contents in a set of commercial Portland cements, clinker, synthetic alite and limestone by Rietveld refinement of X-ray powder diffraction measurements using both external and internal standard methods. A systematic study showed that the sample preparation and comminution procedure is closely linked to the calculated amorphous contents. Particle size reduction by wet-grinding lowered the calculated amorphous contents to insignificant quantities for all materials studied. No amorphous content was identified in the final analysis of the Portland cements under investigation.

  9. Dynamic decision modeling in medicine: a critique of existing formalisms.

    PubMed Central

    Leong, T. Y.

    1993-01-01

    Dynamic decision models are frameworks for modeling and solving decision problems that take into explicit account the effects of time. These formalisms are based on structural and semantical extensions of conventional decision models, e.g., decision trees and influence diagrams, with the mathematical definitions of finite-state semi-Markov processes. This paper identifies the common theoretical basis of existing dynamic decision modeling formalisms, and compares and contrasts their applicability and efficiency. It also argues that a subclass of such dynamic decision problems can be formulated and solved more effectively with non-graphical techniques. Some insights gained from this exercise on automating the dynamic decision making process are summarized. PMID:8130519

  10. A quantitative comparison of Calvin-Benson cycle models.

    PubMed

    Arnold, Anne; Nikoloski, Zoran

    2011-12-01

    The Calvin-Benson cycle (CBC) provides the precursors for biomass synthesis necessary for plant growth. The dynamic behavior and yield of the CBC depend on the environmental conditions and regulation of the cellular state. Accurate quantitative models hold the promise of identifying the key determinants of the tightly regulated CBC function and their effects on the responses in future climates. We provide an integrative analysis of the largest compendium of existing models for photosynthetic processes. Based on the proposed ranking, our framework facilitates the discovery of best-performing models with regard to metabolomics data and of candidates for metabolic engineering. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Quantitative modeling of soil genesis processes

    NASA Technical Reports Server (NTRS)

    Levine, E. R.; Knox, R. G.; Kerber, A. G.

    1992-01-01

    For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.

  13. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
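
    A small sketch of the linkage described above, using a pandas join in place of an Excel lookup: each Basic Event row carries a metadata key that ties it to its data source and to the manipulation (here just a stressing factor) applied to the raw failure rate. The table names, keys, and numbers are illustrative, not taken from any flight model.

      # Sketch: link Basic Events to data sources through a shared metadata key.
      import pandas as pd

      sources = pd.DataFrame({
          "data_key": ["VLV-001", "PMP-007"],
          "raw_failure_rate": [1.2e-6, 4.0e-6],          # per hour, from the cited source
          "reference": ["Handbook A", "Test report B"],
      })
      basic_events = pd.DataFrame({
          "basic_event": ["BE-VALVE-FTO", "BE-PUMP-FTR"],
          "data_key": ["VLV-001", "PMP-007"],
          "stress_factor": [1.0, 2.5],                   # duty-cycle / environment adjustment
      })

      model_inputs = basic_events.merge(sources, on="data_key")
      model_inputs["used_failure_rate"] = model_inputs["raw_failure_rate"] * model_inputs["stress_factor"]
      print(model_inputs[["basic_event", "reference", "used_failure_rate"]])

    The single "data_key" column plays the role of the unique metadata field: revising a source record immediately shows which Basic Events are affected.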

  14. Existence of needle crystals in local models of solidification

    NASA Technical Reports Server (NTRS)

    Langer, J. S.

    1986-01-01

    The way in which surface tension acts as a singular perturbation to destroy the continuous family of needle-crystal solutions of the steady-state growth equations is analyzed in detail for two local models of solidification. All calculations are performed in the limit of small surface tension or, equivalently, small velocity. The basic mathematical ideas are introduced in connection with a quasilinear, isotropic version of the geometrical model of Brower et al., in which case the continuous family of solutions disappears completely. The formalism is then applied to a simplified boundary-layer model with an anisotropic kinetic attachment coefficient. In the latter case, the solvability condition for the existence of needle crystals can be satisfied whenever the coefficient of anisotropy is arbitrarily small but nonzero.

  15. Quantitative Nanostructure-Activity Relationship (QNAR) Modeling

    PubMed Central

    Fourches, Denis; Pu, Dongqiuye; Tassa, Carlos; Weissleder, Ralph; Shaw, Stanley Y.; Mumper, Russell J.; Tropsha, Alexander

    2010-01-01

    Evaluation of biological effects, both desired and undesired, caused by Manufactured NanoParticles (MNPs) is of critical importance for nanotechnology. Experimental studies, especially toxicological, are time-consuming, costly, and often impractical, calling for the development of efficient computational approaches capable of predicting biological effects of MNPs. To this end, we have investigated the potential of cheminformatics methods such as Quantitative Structure – Activity Relationship (QSAR) modeling to establish statistically significant relationships between measured biological activity profiles of MNPs and their physical, chemical, and geometrical properties, either measured experimentally or computed from the structure of MNPs. To reflect the context of the study, we termed our approach Quantitative Nanostructure-Activity Relationship (QNAR) modeling. We have employed two representative sets of MNPs studied recently using in vitro cell-based assays: (i) 51 various MNPs with diverse metal cores (PNAS, 2008, 105, pp 7387–7392) and (ii) 109 MNPs with similar core but diverse surface modifiers (Nat. Biotechnol., 2005, 23, pp 1418–1423). We have generated QNAR models using machine learning approaches such as Support Vector Machine (SVM)-based classification and k Nearest Neighbors (kNN)-based regression; their external prediction power was shown to be as high as 73% for classification modeling and R2 of 0.72 for regression modeling. Our results suggest that QNAR models can be employed for: (i) predicting biological activity profiles of novel nanomaterials, and (ii) prioritizing the design and manufacturing of nanomaterials towards better and safer products. PMID:20857979
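
    A minimal sketch of the kNN-regression side of the workflow described above, with random placeholder descriptors standing in for measured or computed MNP properties; it is not the authors' pipeline, descriptor set, or data, and the external R^2 here only mirrors the style of evaluation they report.

      # Sketch: kNN regression of a nanoparticle activity endpoint from descriptors
      # (random placeholder data; external R^2 as the performance measure).
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(3)
      X = rng.normal(size=(109, 6))                                # placeholder descriptors
      y = X[:, 0] - 0.5 * X[:, 2] + 0.2 * rng.normal(size=109)     # synthetic activity

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      model = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
      print("external R^2:", round(r2_score(y_te, model.predict(X_te)), 2))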

  16. Quantitative Microbiologic Models for Preterm Delivery

    PubMed Central

    Onderdonk, Andrew B.; Lee, Mei-Ling; Lieberman, Ellice; Delaney, Mary L.; Tuomala, Ruth E.

    2003-01-01

    Preterm delivery (PTD) is the leading cause of infant morbidity and mortality in the United States. An epidemiological association between PTD and various bacteria that are part of the vaginal microflora has been reported. No single bacterial species has been identified as being causally associated with PTD, suggesting a multifactorial etiology. Quantitative microbiologic cultures have been used previously to define normal vaginal microflora in a predictive model. These techniques have been applied to vaginal swab cultures from pregnant women in an effort to develop predictive microbiologic models for PTD. Logistic regression analysis with microbiologic information was performed for various risk groups, and the probability of a PTD was calculated for each subject. Four predictive models were generated by using the quantitative microbiologic data. The area under the curve (AUC) for the receiver operating curves ranged from 0.74 to 0.94, with confidence intervals (CI) ranging from 0.62 to 1. The model for the previous PTD risk group with the highest percentage of PTDs had an AUC of 0.91 (CI, 0.79 to 1). It may be possible to predict PTD by using microbiologic risk factors measured once the gestation period has reached the 20-week time point. PMID:12624032
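
    A minimal sketch of the type of risk model described above: a logistic regression on quantitative culture data, evaluated by the area under the ROC curve. The organism counts, coefficients, and cohort below are random placeholders, not the study's vaginal swab data.

      # Sketch: logistic regression of preterm delivery on quantitative culture data,
      # evaluated by the area under the ROC curve (placeholder data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      n = 300
      log_counts = rng.normal(4.0, 1.5, size=(n, 3))          # log10 CFU for three organism groups
      risk = 1 / (1 + np.exp(-(-6.0 + 1.2 * log_counts[:, 0])))
      ptd = rng.binomial(1, risk)

      X_tr, X_te, y_tr, y_te = train_test_split(log_counts, ptd, test_size=0.3, random_state=0)
      clf = LogisticRegression().fit(X_tr, y_tr)
      print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))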

  17. Existence of the Entropy Solution for a Viscoelastic Model

    NASA Astrophysics Data System (ADS)

    Zhu, Changjiang

    1998-06-01

    In this paper, we consider the Cauchy problem for a viscoelastic model with relaxation, u_t + σ_x = 0, (σ - f(u))_t + (1/δ)(σ - μ f(u)) = 0, with discontinuous, large initial data, where 0 < μ < 1 and δ > 0 are constants. When the system is nonstrictly hyperbolic, under the additional assumption v_{0x} ∈ L^∞, the system is reduced to an inhomogeneous scalar balance law by employing the special form of the system itself. After introducing a definition of entropy solutions to the system, we prove the existence, uniqueness, and continuous dependence of the global entropy solution for the system. When the system is strictly hyperbolic, some special entropy pairs of the Lax type are constructed, in which the progression terms are functions of a single variable, and the necessary estimates for the major terms are obtained by using the theory of singular perturbation of ordinary differential equations. The special entropy pairs are used to prove the existence of global entropy solutions for the corresponding Cauchy problem by applying the method of compensated compactness.

  18. New Quantitative Tectonic Models for Southern Alaska

    NASA Astrophysics Data System (ADS)

    Fletcher, H. J.; Freymueller, J. T.

    2002-12-01

    Crustal deformation studies using GPS have made important contributions to our knowledge of the tectonics of Alaska. Since 1995, we have determined precise GPS velocities for more than 350 sites throughout Alaska. We use a subset of these sites to study permanent deformation of the overriding North American plate, in particular the motion on the strike-slip Denali and Fairweather faults and the deformation of interior Alaska. Velocities from almost 100 GPS sites help us to determine how the Pacific-North American plate boundary deformation is distributed and which structures other than the Alaska-Aleutian megathrust are important in accommodating the relative motion of the plates. Based on the GPS velocities, we have constructed new quantitative tectonic models for Alaska. Our models are based on, and a considerable improvement to the model of Lahr and Plafker [1980]. The fundamental difference between our proposed models and theirs is that we use measured slip rates rather than assumed rates or guesses. We thus present the first truly quantitative tectonic models for the deformation of the overriding plate in Alaska, although several important parts of the models need additional data to constrain them. Using dislocation modeling techniques, we estimate slip rates for the McKinley segment of the Denali fault and the Fairweather fault to be approximately 9 mm/yr and 46 mm/yr, respectively. We present three models, all of which involve the Yakutat block, Fairweather block (called the St. Elias block by Lahr and Plafker [1980]), and the southern Alaska block (called the Wrangell block by Lahr and Plafker [1980]). The western boundary to the Southern Alaska block is the most speculative, and the nature and location of this boundary are the only differences between our three proposed models. For each crustal block we determine an Euler pole and angular rotation rate and calculate the slip rates across the boundaries between the blocks. The models provide a first step
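
    A short sketch of the step such block models rely on: converting an Euler pole and rotation rate into a predicted surface velocity, v = omega x r, from which slip rates across block boundaries follow. The pole position, rotation rate, and site below are placeholders, not the estimates reported for southern Alaska.

      # Sketch: surface velocity predicted by a rigid-block Euler rotation, v = omega x r.
      import numpy as np

      R = 6371e3                                    # Earth radius [m]

      def unit_vector(lat_deg, lon_deg):
          lat, lon = np.radians([lat_deg, lon_deg])
          return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

      def block_velocity(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
          omega = np.radians(rate_deg_per_myr) / 1e6 * unit_vector(pole_lat, pole_lon)   # rad/yr
          r = R * unit_vector(site_lat, site_lon)
          return np.cross(omega, r) * 1e3           # mm/yr, Earth-centered Cartesian

      v = block_velocity(55.0, -120.0, 0.4, 63.0, -147.0)   # placeholder pole and site
      print("predicted block speed at the site: %.1f mm/yr" % np.linalg.norm(v))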

  19. Existence of Periodic Solutions for a Modified Growth Solow Model

    NASA Astrophysics Data System (ADS)

    Fabião, Fátima; Borges, Maria João

    2010-10-01

    In this paper we analyze the dynamic of the Solow growth model with a Cobb-Douglas production function. For this purpose, we consider that the labour growth rate, L'(t)/L(t), is a T-periodic function, for a fixed positive real number T. We obtain the closed form solutions for the fundamental Solow equation with the new description of L(t). Using notions of the qualitative theory of ordinary differential equations and nonlinear functional analysis, we prove that there exists one T-periodic solution for the Solow equation. From the economic point of view this is a new result which allows a more realistic interpretation of the stylized facts.
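
    A numerical sketch of the setting described above: the fundamental Solow equation for capital per worker with a Cobb-Douglas technology and a T-periodic labour growth rate n(t) = L'(t)/L(t). After transients die out, the computed path repeats with period T, which is the behavior whose existence the paper proves; the parameter values are illustrative and this integration is not the paper's closed-form solution or existence argument.

      # Sketch: Solow equation k' = s*k**alpha - (n(t) + delta)*k with T-periodic n(t).
      import numpy as np
      from scipy.integrate import solve_ivp

      s, alpha, delta, T = 0.3, 0.33, 0.05, 1.0                  # illustrative parameters
      n = lambda t: 0.02 + 0.01 * np.sin(2 * np.pi * t / T)      # T-periodic labour growth rate

      rhs = lambda t, k: s * k ** alpha - (n(t) + delta) * k
      sol = solve_ivp(rhs, (0.0, 200.0), [1.0], dense_output=True, rtol=1e-8)

      # Sample k at times one period apart, late in the run: for a T-periodic
      # attracting solution these values coincide.
      print(np.round(sol.sol(np.array([190.0, 191.0, 192.0, 193.0]))[0], 6))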

  20. Quantitative risk modelling for new pharmaceutical compounds.

    PubMed

    Tang, Zhengru; Taylor, Mark J; Lisboa, Paulo; Dyas, Mark

    2005-11-15

    The process of discovering and developing new drugs is long, costly and risk-laden. Faced with a wealth of newly discovered compounds, industrial scientists need to target resources carefully to discern the key attributes of a drug candidate and to make informed decisions. Here, we describe a quantitative approach to modelling the risk associated with drug development as a tool for scenario analysis concerning the probability of success of a compound as a potential pharmaceutical agent. We bring together the three strands of manufacture, clinical effectiveness and financial returns. This approach involves the application of a Bayesian Network. A simulation model is demonstrated with an implementation in MS Excel using the modelling engine Crystal Ball.
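
    A deliberately small Monte Carlo sketch of the three-strand framing above (manufacture, clinical effectiveness, financial return); the distributions, thresholds, and probabilities are invented placeholders, and the paper's actual approach is a Bayesian network with a simulation model implemented in MS Excel using Crystal Ball.

      # Sketch: Monte Carlo scenario analysis of a candidate compound across three strands.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 100_000
      makeable = rng.random(n) < 0.7                 # manufacturable at acceptable cost (placeholder)
      effective = rng.random(n) < 0.25               # clinically effective (placeholder)
      net_return = rng.normal(50.0, 80.0, n)         # $M if launched (placeholder distribution)

      success = makeable & effective & (net_return > 0.0)
      print("P(overall success) ~", round(success.mean(), 3))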

  21. Synthetic quantitative MRI through relaxometry modelling.

    PubMed

    Callaghan, Martina F; Mohammadi, Siawoosh; Weiskopf, Nikolaus

    2016-12-01

    Quantitative MRI (qMRI) provides standardized measures of specific physical parameters that are sensitive to the underlying tissue microstructure and are a first step towards achieving maps of biologically relevant metrics through in vivo histology using MRI. Recently proposed models have described the interdependence of qMRI parameters. Combining such models with the concept of image synthesis points towards a novel approach to synthetic qMRI, in which maps of fundamentally different physical properties are constructed through the use of biophysical models. In this study, the utility of synthetic qMRI is investigated within the context of a recently proposed linear relaxometry model. Two neuroimaging applications are considered. In the first, artefact-free quantitative maps are synthesized from motion-corrupted data by exploiting the over-determined nature of the relaxometry model and the fact that the artefact is inconsistent across the data. In the second application, a map of magnetization transfer (MT) saturation is synthesized without the need to acquire an MT-weighted volume, which directly leads to a reduction in the specific absorption rate of the acquisition. This feature would be particularly important for ultra-high field applications. The synthetic MT map is shown to provide improved segmentation of deep grey matter structures, relative to segmentation using T1 -weighted images or R1 maps. The proposed approach of synthetic qMRI shows promise for maximizing the extraction of high quality information related to tissue microstructure from qMRI protocols and furthering our understanding of the interrelation of these qMRI parameters. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
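
    An illustration of the general idea of synthesizing one quantitative map from the others through an over-determined linear model, fitted and inverted by ordinary least squares; the linear form, the coefficients, and the random "maps" are assumptions made for the sketch and are not the specific relaxometry model used in the paper.

      # Sketch: fit a linear relation R1 ~ a + b*MTsat + c*R2s on training voxels, then
      # "synthesize" MTsat from R1 and R2s by inverting it. Purely illustrative data/model.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 10_000
      mtsat = rng.uniform(0.5, 2.0, n)              # percent units, placeholder
      r2s = rng.uniform(10.0, 30.0, n)              # 1/s, placeholder
      r1 = 0.25 + 0.30 * mtsat + 0.01 * r2s + 0.01 * rng.standard_normal(n)   # assumed relation

      A = np.column_stack([np.ones(n), mtsat, r2s])
      a, b, c = np.linalg.lstsq(A, r1, rcond=None)[0]

      mtsat_synth = (r1 - a - c * r2s) / b          # synthetic MT map from R1 and R2*
      print("mean abs error:", round(float(np.mean(np.abs(mtsat_synth - mtsat))), 3))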

  1. Synthetic quantitative MRI through relaxometry modelling

    PubMed Central

    Mohammadi, Siawoosh; Weiskopf, Nikolaus

    2016-01-01

    Abstract Quantitative MRI (qMRI) provides standardized measures of specific physical parameters that are sensitive to the underlying tissue microstructure and are a first step towards achieving maps of biologically relevant metrics through in vivo histology using MRI. Recently proposed models have described the interdependence of qMRI parameters. Combining such models with the concept of image synthesis points towards a novel approach to synthetic qMRI, in which maps of fundamentally different physical properties are constructed through the use of biophysical models. In this study, the utility of synthetic qMRI is investigated within the context of a recently proposed linear relaxometry model. Two neuroimaging applications are considered. In the first, artefact‐free quantitative maps are synthesized from motion‐corrupted data by exploiting the over‐determined nature of the relaxometry model and the fact that the artefact is inconsistent across the data. In the second application, a map of magnetization transfer (MT) saturation is synthesized without the need to acquire an MT‐weighted volume, which directly leads to a reduction in the specific absorption rate of the acquisition. This feature would be particularly important for ultra‐high field applications. The synthetic MT map is shown to provide improved segmentation of deep grey matter structures, relative to segmentation using T 1‐weighted images or R 1 maps. The proposed approach of synthetic qMRI shows promise for maximizing the extraction of high quality information related to tissue microstructure from qMRI protocols and furthering our understanding of the interrelation of these qMRI parameters. PMID:27753154

  2. Comparative Application of Capacity Models for Seismic Vulnerability Evaluation of Existing RC Structures

    SciTech Connect

    Faella, C.; Lima, C.; Martinelli, E.; Nigro, E.

    2008-07-08

    Seismic vulnerability assessment of existing buildings is one of the most common tasks in which Structural Engineers are currently engaged. Since it is often a preliminary step in approaching the issue of how to retrofit structures not designed and detailed for seismic loads, it plays a key role in the successful choice of the most suitable strengthening technique. In this framework, the basic information for both seismic assessment and retrofitting is related to the formulation of capacity models for structural members. Plenty of proposals, often contradictory from the quantitative standpoint, are currently available within the technical and scientific literature for defining structural capacity in terms of forces and displacements, possibly with reference to different parameters representing the seismic response. The present paper briefly reviews some of the models for the capacity of RC members and compares them with reference to two case studies assumed as representative of a wide class of existing buildings.

  3. Magnetospheric mapping with quantitative geomagnetic field models

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Mead, G. D.

    1973-01-01

    The Mead-Fairfield geomagnetic field models were used to trace field lines between the outer magnetosphere and the earth's surface. The results are presented in terms of ground latitude and local time contours projected to the equatorial plane and into the geomagnetic tail. With these contours, various observations can be mapped along field lines between high and low altitudes. Low-altitude observations of the polar cap boundary, the polar cusp, the energetic electron trapping boundary, and the sunward convection region are projected to the equatorial plane and compared with the results of the model and with each other. The results provide quantitative support to the earlier suggestions that the trapping boundary is associated with the last closed field line in the sunward hemisphere, the polar cusp is associated with the region of the last closed field line, and the polar cap projects to the geomagnetic tail and has a low-latitude boundary corresponding to the last closed field line.
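
    A simple illustration of the field-line tracing that underlies this kind of mapping, using a pure dipole rather than the Mead-Fairfield models: integrate dr/ds = B/|B| from an equatorial point down to the surface and read off the foot-point latitude. The starting distance and the dipole-only field are illustrative simplifications.

      # Sketch: trace a field line of a pure dipole (not the Mead-Fairfield model) from
      # the equatorial plane to the surface by integrating dr/ds = B/|B|.
      import numpy as np
      from scipy.integrate import solve_ivp

      M = np.array([0.0, 0.0, -1.0])                # unit dipole moment, pointing south like Earth's

      def dipole_b(r_vec):
          r = np.linalg.norm(r_vec)
          return 3.0 * r_vec * np.dot(M, r_vec) / r ** 5 - M / r ** 3

      def rhs(s, r_vec):
          b = dipole_b(r_vec)
          return b / np.linalg.norm(b)

      hit_surface = lambda s, r_vec: np.linalg.norm(r_vec) - 1.0   # radius in Earth radii
      hit_surface.terminal = True

      sol = solve_ivp(rhs, (0.0, 50.0), [6.6, 0.0, 0.0], events=hit_surface, max_step=0.05)
      foot = sol.y[:, -1]
      print("foot-point latitude ~ %.1f deg" % np.degrees(np.arcsin(foot[2] / np.linalg.norm(foot))))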

  4. Global Quantitative Modeling of Chromatin Factor Interactions

    PubMed Central

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled making various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles; we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896

  5. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  6. A Transformative Model for Undergraduate Quantitative Biology Education

    PubMed Central

    Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  7. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the use of space as a medium of social communication).

  8. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals

    EPA Science Inventory

    Protocols for terrestrial bioaccumulation assessments are far less-developed than for aquatic systems. This manuscript reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, inver...

  10. First Principles Quantitative Modeling of Molecular Devices

    NASA Astrophysics Data System (ADS)

    Ning, Zhanyu

    In this thesis, we report theoretical investigations of nonlinear and nonequilibrium quantum electronic transport properties of molecular transport junctions from atomistic first principles. The aim is to seek not only qualitative but also quantitative understanding of the corresponding experimental data. At present, the challenges to quantitative theoretical work in molecular electronics include two most important questions: (i) what is the proper atomic model for the experimental devices? (ii) how to accurately determine quantum transport properties without any phenomenological parameters? Our research is centered on these questions. We have systematically calculated atomic structures of the molecular transport junctions by performing total energy structural relaxation using density functional theory (DFT). Our quantum transport calculations were carried out by implementing DFT within the framework of Keldysh non-equilibrium Green's functions (NEGF). The calculated data are directly compared with the corresponding experimental measurements. Our general conclusion is that quantitative comparison with experimental data can be made if the device contacts are correctly determined. We calculated properties of nonequilibrium spin injection from Ni contacts to octane-thiolate films which form a molecular spintronic system. The first principles results allow us to establish a clear physical picture of how spins are injected from the Ni contacts through the Ni-molecule linkage to the molecule, why tunnel magnetoresistance is rapidly reduced by the applied bias in an asymmetric manner, and to what extent ab initio transport theory can make quantitative comparisons to the corresponding experimental data. We found that extremely careful sampling of the two-dimensional Brillouin zone of the Ni surface is crucial for accurate results in such a spintronic system. We investigated the role of contact formation and its resulting structures to quantum transport in several molecular

  11. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  12. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  13. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  14. Toward quantitative modeling of silicon phononic thermocrystals

    SciTech Connect

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Dubois, E.; Monfray, S.; Skotnicki, T.

    2015-03-16

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.

  15. Toward quantitative modeling of silicon phononic thermocrystals

    NASA Astrophysics Data System (ADS)

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Monfray, S.; Skotnicki, T.; Dubois, E.

    2015-03-01

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of "thermocrystals" or "nanophononic crystals" that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known "electron crystal-phonon glass" dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
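
    As a rough illustration of the Green-Kubo step described above, the following Python sketch estimates a thermal conductivity from a heat-flux time series via the flux autocorrelation integral. The flux data, time step, cell volume, and temperature are hypothetical placeholders, not values from the paper.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def green_kubo_kappa(J, dt, volume, temperature, t_max):
    """Estimate thermal conductivity from one Cartesian component of the
    heat-flux density via the Green-Kubo relation:
        kappa = V / (kB T^2) * integral_0^t_max <J(0) J(t)> dt
    J: equally spaced heat-flux samples (W/m^2); dt: sample spacing (s);
    volume: simulation-cell volume (m^3); t_max: integration limit (s)."""
    n_lags = int(t_max / dt)
    n = len(J)
    # heat-flux autocorrelation function, averaged over time origins
    acf = np.array([np.mean(J[: n - lag] * J[lag:]) for lag in range(n_lags)])
    return volume / (kB * temperature**2) * np.trapz(acf, dx=dt)

# toy usage with synthetic (uncorrelated) flux samples; a real study would
# use the heat-flux output of the molecular dynamics run
rng = np.random.default_rng(0)
J = rng.normal(0.0, 1e8, size=200_000)
kappa = green_kubo_kappa(J, dt=1e-15, volume=(20e-9) ** 3,
                         temperature=300.0, t_max=1e-12)
print(f"estimated kappa ~ {kappa:.3e} W/(m K)")
```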

  16. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that resulting waves obey the Schroedinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  17. Existing Soil Carbon Models Do Not Apply to Forested Wetlands.

    SciTech Connect

    Trettin, C C; Song, B; Jurgensen, M F; Li, C

    2001-09-14

    Evaluation of 12 widely used soil carbon models to determine applicability to wetland ecosystems. For any land area that includes wetlands, none of the individual models would produce reasonable simulations based on soil processes. Study presents a wetland soil carbon model framework based on desired attributes, the DNDC model and components of the CENTURY and WMEM models. Proposed synthesis would be appropriate when considering soil carbon dynamics at multiple spatial scales and where the land area considered includes both wetland and upland ecosystems.

  18. Training of Existing Workers: Issues, Incentives and Models

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This report presents issues associated with incentives for training existing workers in small to medium-sized firms, identified through a small sample of case studies from the retail, manufacturing, and building and construction industries. While the majority of employers recognise workforce skill levels are fundamental to the success of the…

  19. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
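
    To make the force calculation concrete, the sketch below estimates the vertical force on a magnetically saturated bead as F = m_sat dB/dz from a field profile B(z). The exponential field profile and the bead moment are assumed stand-ins; the paper itself uses semianalytic Biot-Savart and finite-element field solutions.

```python
import numpy as np

# hypothetical field magnitude along the vertical axis below the magnet pair;
# an exponential decay is only a convenient stand-in for the semianalytic
# Biot-Savart or finite-element field of the paper
z = np.linspace(0.0, 3e-3, 300)          # distance from the magnets (m)
B = 0.6 * np.exp(-z / 1.5e-3)            # field magnitude (T), assumed

m_sat = 1.5e-14                          # bead saturation moment (A m^2), assumed

# for a saturated superparamagnetic bead the vertical force is F = m_sat * dB/dz
dBdz = np.gradient(B, z)
F = m_sat * dBdz                         # N; negative sign = pulled toward the magnets

F_pN = np.abs(F) * 1e12
print(f"force on the bead at z = 1 mm: {np.interp(1e-3, z, F_pN):.1f} pN")
```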

  20. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    PubMed Central

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise using the existing intensive-care model in clinical practice to avoid
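
    For readers unfamiliar with the error metrics quoted above, the sketch below shows how MDPE and MDAPE are typically computed from observed and model-predicted serum levels, using a simple one-compartment IV bolus prediction with placeholder parameters rather than the authors' population model.

```python
import numpy as np

def predict_conc(dose_mg, cl_l_per_h, vd_l, t_h):
    """One-compartment IV bolus prediction: C(t) = (dose / Vd) * exp(-(CL/Vd) * t)."""
    k_el = cl_l_per_h / vd_l
    return (dose_mg / vd_l) * np.exp(-k_el * t_h)

def mdpe_mdape(observed, predicted):
    """Median prediction error and median absolute prediction error (percent)."""
    pe = 100.0 * (predicted - observed) / observed
    return np.median(pe), np.median(np.abs(pe))

# hypothetical observed peak levels (mg/L) and the corresponding predictions
observed = np.array([15.2, 12.1, 17.3, 13.4])
predicted = predict_conc(dose_mg=400.0,
                         cl_l_per_h=4.5,                    # placeholder clearance
                         vd_l=22.0,                         # placeholder Vd
                         t_h=np.array([1.0, 2.0, 0.5, 1.5]))
mdpe, mdape = mdpe_mdape(observed, predicted)
print(f"MDPE = {mdpe:.1f}%, MDAPE = {mdape:.1f}%")
```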

  1. Quantitative assessment of computational models for retinotopic map formation

    PubMed Central

    Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2014-01-01

    Molecular and activity‐based cues acting together are thought to guide retinal axons to their terminal sites in vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity‐based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2‐EphA3 ki/ki, Isl2‐EphA3 ki/+, ephrin‐A2,A3,A5 triple knock‐out (TKO), and Math5 −/− (Atoh7). Two models successfully reproduced the extent of the Math5 −/− anteromedial projection, but only one of those could account for the collapse point in Isl2‐EphA3 ki/+. The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin‐A2,A3,A5 TKO phenotype, suggesting either an incomplete knock‐out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. © 2014 Wiley Periodicals, Inc. Develop Neurobiol 75: 641–666, 2015 PMID:25367067

  2. The challenge of integrating new online education packages into existing curricula: a new model.

    PubMed

    Grant, Janet; Owen, Heather; Sandars, John; Walsh, Kieran; Richardson, Judith; Rutherford, Alaster; Siddiqi, Kamran; Ibison, Judith; Maxted, Mairead

    2011-01-01

    In 2009, The National Institute for Health and Clinical Excellence (NICE) developed an undergraduate online learning package on the practical application of evidence-based medicine with the intention that it would be integrated into existing medical curricula. Complementary methodologies were used to yield a diversity of quantitative and qualitative data on how the online learning package was integrated. The modules of the online learning package received an overall positive reaction from the users but uptake of the modules was lower than expected. Even though some curriculum integration occurred, several students were unaware that the package existed, some lacked the time to use the package and others would have preferred to have had the package earlier in their course. A new model for the effective integration of online education packages into existing undergraduate medical curricula is proposed, especially when developed by external organisations. This new model should enable educationalists to better reveal and overcome the contextual and process challenges, barriers and solutions to implementing effective flexible learning approaches. When introducing new learning resources into a curriculum, many factors are important, especially the learners' perceived needs and how these vary at different stages of their course.

  3. Nursing in disasters: A review of existing models.

    PubMed

    Pourvakhshoori, Negar; Norouzi, Kian; Ahmadi, Fazlollah; Hosseini, Mohammadali; Khankeh, Hamidreza

    2017-03-01

    Since nurses play an important role in responding to disasters, evaluating their knowledge on common patterns of disasters is a necessity. This study examined research conducted on disaster nursing as well as the models adopted. It provides a critical analysis of the models available for disaster nursing. International electronic databases including Scopus, PubMed, ISI Web of Science, Cochrane Library, Cumulative Index to Nursing and Allied Health (CINAHL), and Google Scholar were investigated with no limitation on type of articles, between 1st January 1980 and 31st January 2016. The search terms and strategy were as follows: (Disaster(∗) OR Emergenc(∗)) AND (Model OR Theory OR Package OR Pattern) AND (Nursing OR Nurse(∗)). They were applied for titles, abstracts and key words. This resulted in the generation of disaster nursing models. Out of the 1983 publications initially identified, the final analysis was conducted on 8 full text articles. These studies presented seven models. These evinced a diverse set of models with regard to the domains and the target population. Although disaster nursing models will inform disaster risk reduction strategies, attempts to systematically do so are in preliminary phases. Further investigation is needed to develop a domestic nursing model in the event of disasters. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Mathematical Existence Results for the Doi-Edwards Polymer Model

    NASA Astrophysics Data System (ADS)

    Chupin, Laurent

    2017-01-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two dimensional case, without any restriction on the smallness of the data.

  5. Continuous-time random walk models of DNA electrophoresis in a post array: part I. Evaluation of existing models.

    PubMed

    Olson, Daniel W; Ou, Jia; Tian, Mingwei; Dorfman, Kevin D

    2011-02-01

    Several continuous-time random walk (CTRW) models exist to predict the dynamics of DNA in micropost arrays, but none of them quantitatively describes the separation seen in experiments or simulations. In Part I of this series, we examine the assumptions underlying these models by observing single molecules of λ DNA during electrophoresis in a regular, hexagonal array of oxidized silicon posts. Our analysis takes advantage of a combination of single-molecule videomicroscopy and previous Brownian dynamics simulations. Using a custom-tracking program, we automatically identify DNA-post collisions and thus study a large ensemble of events. Our results show that the hold-up time and the distance between collisions for consecutive collisions are uncorrelated. The distance between collisions is a random variable, but it can be smaller than the minimum value predicted by existing models of DNA transport in post arrays. The current CTRW models correctly predict the exponential decay in the probability density of the collision hold-up times, but they fail to account for the influence of finite-sized posts on short hold-up times. The shortcomings of the existing models identified here motivate the development of a new CTRW approach, which is presented in Part II of this series.
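
    A minimal sketch of the class of CTRW models discussed above: a molecule alternates free flights between posts with exponentially distributed hold-up times at posts. The parameter values are illustrative only and are not fitted to the experiments in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_velocity(n_collisions, mean_holdup_s, mean_free_dist_m, drift_speed_m_s):
    """Continuous-time random walk of the type discussed above: exponential
    free-flight distances between posts, exponential hold-up times at posts.
    Returns the mean migration velocity over the trajectory."""
    free_dist = rng.exponential(mean_free_dist_m, n_collisions)
    holdup = rng.exponential(mean_holdup_s, n_collisions)
    total_time = np.sum(free_dist / drift_speed_m_s + holdup)
    return np.sum(free_dist) / total_time

# hypothetical parameters, not fitted to the experiments in the paper
v = ctrw_velocity(n_collisions=10_000, mean_holdup_s=0.5,
                  mean_free_dist_m=10e-6, drift_speed_m_s=20e-6)
print(f"mean migration velocity ~ {v * 1e6:.2f} um/s")
```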

  6. Existence of solutions for a host-parasite model

    NASA Astrophysics Data System (ADS)

    Milner, Fabio Augusto; Patton, Curtis Allan

    2001-12-01

    The sea bass Dicentrarchus labrax has several gill ectoparasites. Diplectanum aequans (Plathelminth, Monogenea) is one of these species. Under certain demographic conditions, this flat worm can trigger pathological problems, in particular in fish farms. The life cycle of the parasite is described and a model for the dynamics of its interaction with the fish is described and analyzed. The model consists of a coupled system of ordinary differential equations and one integro-differential equation.

  7. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    SciTech Connect

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  8. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

    PubMed Central

    Quo, Chang F; Wang, May D

    2008-01-01

    Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges from 10^-15 to 10^10, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with a quantitative performance assessment metric, we show that it is possible
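
    The following Python sketch mirrors the solver comparison described above on a scaled three-variable Oregonator, contrasting an explicit Runge-Kutta method with a stiff BDF method (analogous to MATLAB's ode45 and ode15s). The parameter values are common illustrative choices, not necessarily those used in the paper's case study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# scaled three-variable Oregonator; parameter values are common illustrative
# choices, not necessarily those of the paper's dimensional BZ case study
eps1, eps2, q, f = 9.90e-3, 1.98e-5, 7.62e-5, 1.0

def oregonator(t, u):
    x, y, z = u
    return [(q * y - x * y + x * (1.0 - x)) / eps1,
            (-q * y - x * y + f * z) / eps2,
            x - z]

u0, t_span = [0.2, 0.2, 0.2], (0.0, 1.0)

# explicit Runge-Kutta (non-stiff) versus BDF (stiff) solver, analogous to the
# MATLAB ode45 / ode15s comparison described above; the stiff solver needs
# orders of magnitude fewer steps
for method in ("RK45", "BDF"):
    sol = solve_ivp(oregonator, t_span, u0, method=method, rtol=1e-6, atol=1e-9)
    print(f"{method}: success={sol.success}, rhs evaluations={sol.nfev}")
```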

  9. LDEF data correlation to existing NASA debris environment models

    NASA Technical Reports Server (NTRS)

    Atkinson, Dale R.; Allbrooks, Martha K.; Watts, Alan J.

    1991-01-01

    Since the Long Duration Exposure Facility was gravity gradient stabilized and did not rotate, the directional dependence of the flux can be easily distinguished. During the deintegration of LDEF, all impact features larger than 0.5 mm into aluminum were documented for diameters and locations. In addition, all diameters and locations of all impact features larger than 0.3 mm into Scheldahl G411500 thermal control blankets were also documented. This data, along with additional information collected from LDEF materials archived at NASA Johnson Space Center (JSC) on smaller features, will be compared with current meteoroid and debris models. This comparison will provide a validation of the models and will identify discrepancies between the models and the data.

  10. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
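
    As an illustration of the common structural form identified above, the sketch below integrates a small network of second-order phase oscillators with forcing and damping and checks for frequency synchronization. The network, inertia, damping, and coupling values are made up for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# small illustrative network of second-order phase oscillators with forcing
# (net power P) and damping D -- the common structural form of the compared
# models; all parameter values here are made up
n = 4
P = np.array([1.0, 1.0, -1.0, -1.0])      # generators (+) and loads (-)
D = np.full(n, 0.5)                        # damping coefficients
M = np.full(n, 1.0)                        # inertia constants
K = 2.0 * (np.ones((n, n)) - np.eye(n))    # all-to-all coupling strengths

def rhs(t, y):
    theta, omega = y[:n], y[n:]
    # coupling term: sum_j K_ij * sin(theta_j - theta_i)
    coupling = np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)
    return np.concatenate([omega, (P - D * omega + coupling) / M])

sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(2 * n), max_step=0.05)

# frequency synchronization: all angular velocities settle to a common value
print("final frequencies:", np.round(sol.y[n:, -1], 3))
```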

  11. Exploring Higher Education Business Models ("If Such a Thing Exists")

    ERIC Educational Resources Information Center

    Harney, John O.

    2013-01-01

    The global economic recession has caused students, parents, and policymakers to reevaluate personal and societal investments in higher education--and has prompted the realization that traditional higher ed "business models" may be unsustainable. Predicting a shakeout, most presidents expressed confidence for their own school's ability to…

  12. Existing Soil Carbon Models Do Not Apply to Forested Wetlands

    Treesearch

    Carl C. Trettin; B. Song; M.F. Jurgensen; C. Li

    2001-01-01

    When assessing the biological, geological, and chemical cycling of nutrients and elements — or when assessing carbon dynamics with respect to global change — modeling and simulation are necessary. Although wetlands occupy a relatively small proportion of Earth’s terrestrial surface (

  13. Determining if Instructional Delivery Model Differences Exist in Remedial English

    ERIC Educational Resources Information Center

    Carter, LaTanya Woods

    2012-01-01

    The purpose of this causal comparative study is to test the theory of no significant difference that compares pre- and post-test assessment scores, controlling for the instructional delivery model of online and face-to-face students at a Mid-Atlantic university. Online education and virtual distance learning programs have increased in popularity…

  15. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    PubMed

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

    Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with the conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in only certain parts of the system, i.e., where kinetic data are missing. The case study of the approach proposed here is performed on a model of the three-gene repressilator.
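
    A toy sketch of the idea, assuming nothing about the paper's actual rule base: production of each repressilator protein is set by fuzzy rules on its repressor's level (LOW/HIGH memberships with weighted-average defuzzification) instead of a kinetic rate law. All membership functions and rates below are hypothetical.

```python
import numpy as np

def mu_low(x, lo=0.0, hi=1.0):
    """Membership of 'repressor is LOW' (linear ramp from 1 down to 0)."""
    return np.clip((hi - x) / (hi - lo), 0.0, 1.0)

def mu_high(x, lo=0.0, hi=1.0):
    """Membership of 'repressor is HIGH'."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def fuzzy_production(repressor, p_max=1.0, p_min=0.05):
    """Rules: IF repressor LOW THEN production HIGH; IF repressor HIGH THEN
    production LOW. Defuzzify by a weighted average of the rule outputs."""
    w_low, w_high = mu_low(repressor), mu_high(repressor)
    return (w_low * p_max + w_high * p_min) / (w_low + w_high + 1e-12)

# integrate the three-gene loop (1 represses 2, 2 represses 3, 3 represses 1)
dt, steps, deg = 0.05, 4000, 0.3
x = np.array([0.9, 0.1, 0.1])
history = np.empty((steps, 3))
for k in range(steps):
    prod = np.array([fuzzy_production(x[2]),   # gene 1 repressed by protein 3
                     fuzzy_production(x[0]),   # gene 2 repressed by protein 1
                     fuzzy_production(x[1])])  # gene 3 repressed by protein 2
    x = x + dt * (prod - deg * x)
    history[k] = x

# nonzero late-time ranges indicate sustained oscillations of the loop
print("late-time ranges per protein:",
      np.round(history[2000:].max(axis=0) - history[2000:].min(axis=0), 2))
```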

  16. LDEF data correlation to existing NASA debris environment models

    NASA Technical Reports Server (NTRS)

    Atkinson, Dale R.; Allbrooks, Martha K.; Watts, Alan J.

    1992-01-01

    The Long Duration Exposure Facility (LDEF) was recovered in January 1990, following 5.75 years exposure of about 130 sq. m to low-Earth orbit. About 25 sq. m of this surface area was aluminum 6061 T-6 exposed in every direction. In addition, about 17 sq. m of Scheldahl G411500 silver-Teflon thermal control blankets were exposed in 9 of the 12 directions. Since the LDEF was gravity gradient stabilized and did not rotate, the directional dependence of the flux can be easily distinguished. During the deintegration of the LDEF, all impact features larger than 0.5 mm into aluminum were documented for diameters and locations. In addition, the diameters and locations of all impact features larger than 0.3 mm into Scheldahl G411500 thermal control blankets were also documented. This data, along with additional information collected from LDEF materials, will be compared with current meteoroid and debris models. This comparison will provide a validation of the models and will identify discrepancies between the models and the data.

  17. A quantitative risk assessment model for Salmonella and whole chickens.

    PubMed

    Oscar, Thomas P

    2004-06-01

    Existing data and predictive models were used to define the input settings of a previously developed but modified quantitative risk assessment model (QRAM) for Salmonella and whole chickens. The QRAM was constructed in an Excel spreadsheet and was simulated using @Risk. The retail-to-table pathway was modeled as a series of unit operations and associated pathogen events that included initial contamination at retail, growth during consumer transport, thermal inactivation during cooking, cross-contamination during serving, and dose response after consumption. Published data as well as predictive models for growth and thermal inactivation of Salmonella were used to establish input settings. Noncontaminated chickens were simulated so that the QRAM could predict changes in the incidence of Salmonella contamination. The incidence of Salmonella contamination changed from 30% at retail to 0.16% after cooking to 4% at consumption. Salmonella growth on chickens during consumer transport was the only pathogen event that did not impact the risk of salmonellosis. For the scenario simulated, the QRAM predicted 0.44 cases of salmonellosis per 100,000 consumers, which was consistent with recent epidemiological data that indicate a rate of 0.66-0.88 cases of salmonellosis per 100,000 consumers of chicken. Although the QRAM was in agreement with the epidemiological data, surrogate data and models were used, assumptions were made, and potentially important unit operations and pathogen events were not included because of data gaps and thus, further refinement of the QRAM is needed.
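
    The sketch below illustrates the general retail-to-table Monte Carlo structure described above (prevalence at retail, growth, cooking, cross-contamination, dose response). Every distribution and parameter value is a placeholder, not an input of the published QRAM.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                    # simulated servings

# illustrative retail-to-table chain; every number below is a placeholder,
# not an input of the published QRAM
NONE = -99.0                                   # log10 CFU sentinel for "no cells"
contaminated = rng.random(n) < 0.30            # prevalence at retail
log_cfu = np.where(contaminated, rng.normal(1.0, 0.8, n), NONE)

log_cfu = log_cfu + rng.uniform(0.0, 0.3, n)   # growth during consumer transport
log_cfu = log_cfu - rng.uniform(5.0, 7.0, n)   # thermal inactivation during cooking

# cross-contamination during serving: a small fraction of servings pick up
# organisms that bypassed cooking
cross = rng.random(n) < 0.04
log_cfu = np.where(cross, np.maximum(log_cfu, rng.normal(0.0, 0.5, n)), log_cfu)

dose = 10.0 ** log_cfu                         # ingested cells per serving
r = 2.0e-4                                     # exponential dose-response parameter (assumed)
p_ill = 1.0 - np.exp(-r * dose)

print(f"predicted ~{p_ill.mean() * 1e5:.2f} cases per 100,000 servings")
```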

  18. Geysers of Enceladus: Quantitative analysis of qualitative models

    NASA Astrophysics Data System (ADS)

    Brilliantov, Nikolai V.; Schmidt, Jürgen; Spahn, Frank

    2008-11-01

    Aspects of two qualitative models of Enceladus' dust plume - the so-called "Cold Faithful" [Porco, C.C., et al., 2006. Cassini observes the active south pole of Enceladus. Science 311, 1393-1401; Ingersoll, A.P., et al., 2006. Models of the Enceladus plumes. In: Bulletin of the American Astronomical Society, vol. 38, p. 508] and "Frigid Faithful" [Kieffer, S.W., et al., 2006. A clathrate reservoir hypothesis for Enceladus' south polar plume. Science 314, 1764; Gioia, G., et al., 2007. Unified model of tectonics and heat transport in a Frigid Enceladus. Proc. Natl. Acad. Sci. 104, 13578-13591] models - are analyzed quantitatively. The former model assumes an explosive boiling of subsurface liquid water, when pressure exerted by the ice crust is suddenly released due to an opening crack. In the latter model the existence of a deep shell of clathrates below Enceladus' south pole is conjectured; clathrates can decompose explosively when exposed to vacuum through a fracture in the outer icy shell. For the Cold Faithful model we estimate the maximal velocity of ice grains, originating from water splashing in explosive boiling. We find that for water near the triple point this velocity is far too small to explain the observed plume properties. For the Frigid Faithful model we consider the problem of momentum transfer from gas to ice particles. It arises since any change in the direction of the gas flow in the cracks of the shell requires re-acceleration of the entrained grains. While this effect may explain the observed speed difference of gas and grains if the gas evaporates from triple point temperature (273.15 K) [Schmidt, J., et al., 2008. Formation of Enceladus dust plume. Nature 451, 685], the low temperatures of the Frigid Faithful model (˜140-170K) imply a too dilute vapor to support the observed high particle fluxes in Enceladus' plume.

  19. A Quantitative Model of Expert Transcription Typing

    DTIC Science & Technology

    1993-03-08

    monitoring the accuracy of the typing...the deterioration of typing rate that occurs as the text is modified from normal prose to non-language or random...letters...for non-alphabetical keys. (p. 6) Rumelhart and Norman also do not attempt to make zero-parameter quantitative predictions of typing...Salthouse's two-choice reaction time task was somewhat non-standard: Stimuli were uppercase and lowercase versions of the letters L and R, and responses

  20. The Impact of School Climate on Student Achievement in the Middle Schools of the Commonwealth of Virginia: A Quantitative Analysis of Existing Data

    ERIC Educational Resources Information Center

    Bergren, David Alexander

    2014-01-01

    This quantitative study was designed to be an analysis of the relationship between school climate and student achievement through the creation of an index of climate-factors (SES, discipline, attendance, and school size) for which publicly available data existed. The index that was formed served as a proxy measure of climate; it was analyzed…

  2. A Quantitative Software Risk Assessment Model

    NASA Technical Reports Server (NTRS)

    Lee, Alice

    2002-01-01

    This slide presentation reviews a risk assessment model as applied to software development. The presentation uses graphs to demonstrate basic concepts of software reliability. It also discusses the application of the risk model to the software development life cycle.

  3. A quantitative model for designing keyboard layout.

    PubMed

    Shieh, K K; Lin, C C

    1999-02-01

    This study analyzed the quantitative relationship between keytapping times and ergonomic principles in typewriting skills. Keytapping times and key-operating characteristics of a female subject typing on the Qwerty and Dvorak keyboards for six weeks each were collected and analyzed. The results showed that characteristics of the typed material and the movements of hands and fingers were significantly related to keytapping times. The most significant factors affecting keytapping times were association frequency between letters, consecutive use of the same hand or finger, and the finger used. A regression equation for relating keytapping times to ergonomic principles was fitted to the data. Finally, a protocol for design of computerized keyboard layout based on the regression equation was proposed.
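
    A minimal sketch of the kind of regression described above, fitting keytapping time to ergonomic factors with ordinary least squares; the predictor set and the synthetic data are assumptions for illustration only.

```python
import numpy as np

# illustrative regression of keytapping time on ergonomic factors of the kind
# described above (letter-pair association frequency, same-finger reuse,
# finger used); the data are synthetic placeholders, not the study's
rng = np.random.default_rng(5)
n = 500
assoc_freq = rng.random(n)                 # normalized digram frequency
same_finger = rng.integers(0, 2, n)        # consecutive use of the same finger
finger_strength = rng.integers(1, 5, n)    # 1 = pinky ... 4 = index

# synthetic tapping times (ms) generated from an assumed linear relation
t = (180 - 40 * assoc_freq + 60 * same_finger
     - 10 * finger_strength + rng.normal(0, 15, n))

X = np.column_stack([np.ones(n), assoc_freq, same_finger, finger_strength])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)
print("fitted coefficients (intercept, assoc, same finger, finger):",
      np.round(coef, 1))
```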

  4. A review: Quantitative models for lava flows on Mars

    NASA Technical Reports Server (NTRS)

    Baloga, S. M.

    1987-01-01

    The purpose of this abstract is to review and assess the application of quantitative models (Gratz numerical correlation model, radiative loss model, yield stress model, surface structure model, and kinematic wave model) of lava flows on Mars. These theoretical models were applied to Martian flow data to aid in establishing the composition of the lava or to determine other eruption conditions such as eruption rate or duration.

  5. What Are We Doing When We Translate from Quantitative Models?

    PubMed Central

    Critchfield, Thomas S; Reed, Derek D

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively. PMID:22478533

  6. What are we doing when we translate from quantitative models?

    PubMed

    Critchfield, Thomas S; Reed, Derek D

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively.

  7. A Transformative Model for Undergraduate Quantitative Biology Education

    ERIC Educational Resources Information Center

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…

  9. Model-based drug development: the road to quantitative pharmacology.

    PubMed

    Zhang, Liping; Sinha, Vikram; Forgue, S Thomas; Callies, Sophie; Ni, Lan; Peck, Richard; Allerheiligen, Sandra R B

    2006-06-01

    High development costs and low success rates in bringing new medicines to the market demand more efficient and effective approaches. Identified by the FDA as a valuable prognostic tool for fulfilling such a demand, model-based drug development is a mathematical and statistical approach that constructs, validates, and utilizes disease models, drug exposure-response models, and pharmacometric models to facilitate drug development. Quantitative pharmacology is a discipline that learns and confirms the key characteristics of new molecular entities in a quantitative manner, with the goal of providing explicit, reproducible, and predictive evidence for optimizing drug development plans and enabling critical decision making. Model-based drug development serves as an integral part of quantitative pharmacology. This work reviews the general concept, basic elements, and evolving role of model-based drug development in quantitative pharmacology. Two case studies are presented to illustrate how the model-based drug development approach can facilitate knowledge management and decision making during drug development. The case studies also highlight the organizational learning that comes through implementation of quantitative pharmacology as a discipline. Finally, the prospects of quantitative pharmacology as an emerging discipline are discussed. Advances in this discipline will require continued collaboration between academia, industry and regulatory agencies.

  10. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals.

    PubMed

    Gobas, Frank A P C; Burkhard, Lawrence P; Doucette, William J; Sappington, Keith G; Verbruggen, Eric M J; Hope, Bruce K; Bonnell, Mark A; Arnot, Jon A; Tarazona, Jose V

    2016-01-01

    Protocols for terrestrial bioaccumulation assessments are far less-developed than for aquatic systems. This article reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, invertebrate, mammal, and avian species and for entire terrestrial food webs, including some that consider spatial factors. Limitations and gaps in terrestrial bioaccumulation modeling include the lack of QSARs for biotransformation and dietary assimilation efficiencies for terrestrial species; the lack of models and QSARs for important terrestrial species such as insects, amphibians and reptiles; the lack of standardized testing protocols for plants with limited development of plant models; and the limited chemical domain of existing bioaccumulation models and QSARs (e.g., primarily applicable to nonionic organic chemicals). There is an urgent need for high-quality field data sets for validating models and assessing their performance. There is a need to improve coordination among laboratory, field, and modeling efforts on bioaccumulative substances in order to improve the state of the science for challenging substances.

  11. Quantitative model of the Cerro Prieto field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-03-01

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field. It takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled. The results of this model match reasonably well observed pressure and temperature distributions. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  12. Quantitative Model of the Cerro Prieto Field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-01-21

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field. It takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled. The results of this model match reasonably well observed pressure and temperature distributions. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  13. Hazard Response Modeling Uncertainty (A Quantitative Method)

    DTIC Science & Technology

    1988-10-01

    Accidental Release Model (HARM) for application to accidental spills at Titan II sites. A rocket exhaust diffusion model developed by the H.E. Cramer... ignored. But Luna and Church (Reference 73), among others, show that the scatter in observed values of σ associated with each... [Reference 73: Luna, R.E., and H.W. Church, "A Comparison of Turbulence Intensity and Stability...," No. 79, Atmospheric Turbulence and Diffusion Laboratory, 1973.]

  14. Quantitative description and modeling of real networks

    NASA Astrophysics Data System (ADS)

    Capocci, Andrea; Caldarelli, Guido; de Los Rios, Paolo

    2003-10-01

    We present data analysis and modeling of two particular cases of study in the field of growing networks. We analyze World Wide Web data set and authorship collaboration networks in order to check the presence of correlation in the data. The results are reproduced with good agreement through a suitable modification of the standard Albert-Barabási model of network growth. In particular, intrinsic relevance of sites plays a role in determining the future degree of the vertex.
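
    One simple way to implement the kind of modification described above (not necessarily the authors' exact formulation) is to let the attachment probability scale with an intrinsic fitness times degree, as sketched below; all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def grow_fitness_network(n_nodes, m=2):
    """Grow a network in which a new node attaches to existing node i with
    probability proportional to fitness_i * degree_i -- one simple way to let
    an intrinsic 'relevance' modify standard preferential attachment."""
    fitness = rng.random(n_nodes)                  # intrinsic relevance, assumed uniform
    degree = np.zeros(n_nodes, dtype=int)
    edges = []
    seed = m + 1                                   # small fully connected seed graph
    for i in range(seed):
        for j in range(i + 1, seed):
            edges.append((i, j))
            degree[i] += 1
            degree[j] += 1
    for new in range(seed, n_nodes):
        weights = fitness[:new] * degree[:new]
        targets = rng.choice(new, size=m, replace=False, p=weights / weights.sum())
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return degree, fitness

degree, fitness = grow_fitness_network(5_000)
# higher-fitness nodes should end up with larger degree on average
print("corr(fitness, degree) =", np.round(np.corrcoef(fitness, degree)[0, 1], 2))
```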

  15. Quantitative magnetospheric models: results and perspectives.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is ample observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.

  16. Modeling with Young Students--Quantitative and Qualitative.

    ERIC Educational Resources Information Center

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…

  17. Bayesian inverse modeling for quantitative precipitation estimation

    NASA Astrophysics Data System (ADS)

    Schinagl, Katharina; Rieger, Christian; Simmer, Clemens; Xie, Xinxin; Friederichs, Petra

    2017-04-01

    Polarimetric radars provide us with a wealth of precipitation-related measurements. In particular, their high spatial and temporal resolution makes the data an important source of information, e.g. for hydrological modeling. However, uncertainties in the precipitation estimates are large. Their systematic assessment and quantification is thus of great importance. Polarimetric radar observables like the horizontal and vertical reflectivities Z_H and Z_V, the cross-correlation coefficient ρ_HV, and the specific differential phase K_DP are related to the drop size distribution (DSD) in the scan. This relation is described by forward operators which are integrals over the DSD and scattering terms. Given the polarimetric observables, the respective forward operators and assumptions about the measurement errors, we investigate the uncertainty in the DSD parameter estimation and, based on it, the uncertainty of precipitation estimates. We assume that the DSD follows a Gamma model, N(D) = N_0 D^μ exp(-ΛD), where all three parameters are variable. This model allows us to account for the high variability of the DSD. We employ the framework of Bayesian inverse methods to derive the posterior distribution of the DSD parameters. The inverse problem is investigated in a simulated environment (SE) using the COSMO-DE numerical weather prediction model. The advantage of the SE is that - unlike in a real-world application - we know the parameters we want to estimate. Thus, building the inverse model into the SE gives us the opportunity of verifying our results against the COSMO-simulated DSD values.
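
    A toy sketch of the inversion idea, using a heavily simplified forward operator (sixth-moment reflectivity of a Gamma DSD) and a grid posterior over two DSD parameters; the real polarimetric forward operators involve scattering calculations, and every number below is an assumption.

```python
import numpy as np

# simplified forward operator: reflectivity factor of a Gamma DSD,
# Z = integral N0 D^mu exp(-Lambda D) D^6 dD (toy units); the real operators
# involve scattering terms
D = np.linspace(0.1, 8.0, 400)                        # drop diameter (mm)

def reflectivity(n0, mu, lam):
    nd = n0 * D**mu * np.exp(-lam * D)
    return np.trapz(nd * D**6, D)

# "truth" and a noisy observation of 10*log10(Z)
z_true = reflectivity(n0=8000.0, mu=2.0, lam=2.5)
z_obs_db = 10 * np.log10(z_true) + np.random.default_rng(3).normal(0.0, 1.0)
sigma_db = 1.0                                        # assumed measurement error (dB)

# grid posterior over (mu, Lambda) with N0 fixed and a flat prior
mus = np.linspace(0.0, 6.0, 61)
lams = np.linspace(1.0, 5.0, 81)
logpost = np.empty((mus.size, lams.size))
for i, mu in enumerate(mus):
    for j, lam in enumerate(lams):
        z_db = 10 * np.log10(reflectivity(8000.0, mu, lam))
        logpost[i, j] = -0.5 * ((z_obs_db - z_db) / sigma_db) ** 2

post = np.exp(logpost - logpost.max())
post /= post.sum()
i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP estimate: mu = {mus[i]:.2f}, Lambda = {lams[j]:.2f}")
```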

  18. Steady-state existence of passive vector fields under the Kraichnan model.

    PubMed

    Arponen, Heikki

    2010-03-01

    The steady-state existence problem for Kraichnan advected passive vector models is considered for isotropic and anisotropic initial values in arbitrary dimension. The models include the magnetohydrodynamic (MHD) equations, linear pressure model, and linearized Navier-Stokes (LNS) equations. In addition to reproducing the previously known results for the MHD model, we obtain the values of the Kraichnan model roughness parameter xi for which the LNS steady state exists.

  19. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to an interior of any size or shape, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  20. A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.

    PubMed

    Bruneton, Eric

    2016-10-27

    We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model, and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer simplifications and approximations are used to solve the physical equations, the more accurate the results are. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy.

  1. Steps toward quantitative infrasound propagation modeling

    NASA Astrophysics Data System (ADS)

    Waxler, Roger; Assink, Jelle; Lalande, Jean-Marie; Velea, Doru

    2016-04-01

    Realistic propagation modeling requires propagation models capable of incorporating the relevant physical phenomena as well as sufficiently accurate atmospheric specifications. The wind speed and temperature gradients in the atmosphere provide multiple ducts in which low frequency sound, infrasound, can propagate efficiently. The winds in the atmosphere are quite variable, both temporally and spatially, causing the sound ducts to fluctuate. For ground to ground propagation the ducts can be borderline in that small perturbations can create or destroy a duct. In such cases the signal propagation is very sensitive to fluctuations in the wind, often producing highly dispersed signals. The accuracy of atmospheric specifications is constantly improving as sounding technology develops. There is, however, a disconnect between sound propagation and atmospheric specification in that atmospheric specifications are necessarily statistical in nature while sound propagates through a particular atmospheric state. In addition, infrasonic signals can travel to great altitudes, on the order of 120 km, before refracting back to Earth. At such altitudes the atmosphere becomes quite rarefied, causing sound propagation to become highly non-linear and attenuating. Approaches to these problems will be presented.

  2. Existence of almost periodic solution of a model of phytoplankton allelopathy with delay

    NASA Astrophysics Data System (ADS)

    Abbas, Syed; Mahto, Lakshman

    2012-09-01

    In this paper we discuss a non-autonomous two-species competitive allelopathic phytoplankton model in which each species produces a chemical that stimulates the growth of the other. We have studied the existence and uniqueness of an almost periodic solution for the model system considered. Sufficient conditions are derived for the existence of a unique almost periodic solution.

  3. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  4. Relevance of MTF and NPS in quantitative CT: towards developing a predictable model of quantitative performance

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Richard, Samuel; Samei, Ehsan

    2012-03-01

    The quantification of lung nodule volume based on CT images provides valuable information for disease diagnosis and staging. However, the precision of the quantification is protocol, system, and technique dependent and needs to be evaluated for each specific case. To efficiently investigate the quantitative precision and find an optimal operating point, it is important to develop a predictive model based on basic system parameters. In this study, a Fourier-based metric, the estimability index (e'), was proposed as such a predictor, and validated across a variety of imaging conditions. To first obtain the ground truth of quantitative precision, an anthropomorphic chest phantom with synthetic spherical nodules was imaged on a 64-slice CT scanner across a range of protocols (five exposure levels and two reconstruction algorithms). The volumes of nodules were quantified from the images using clinical software, with the precision of the quantification calculated for each protocol. To predict the precision, e' was calculated for each protocol based on several Fourier-based figures of merit, which modeled the characteristics of the quantitation task and the imaging conditions (resolution, noise, etc.) of a particular protocol. Results showed a strong correlation (R2=0.92) between the measured and predicted precision across all protocols, indicating e' as an effective predictor of the quantitative precision. This study provides a useful framework for quantification-oriented optimization of CT protocols.
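
    As a rough illustration of what a Fourier-based predictor of this kind computes, the sketch below evaluates a generic task-based figure of merit of the form ∫ |W·TTF|² / NPS df; the functional forms and numbers are hypothetical placeholders, and the paper's exact definition of e' may differ.

```python
import numpy as np

# Hypothetical task function, task transfer function (TTF) and noise power
# spectrum (NPS); the functional forms and numbers are placeholders only.
f = np.linspace(0.01, 1.0, 500)               # spatial frequency, cycles/mm
dF = f[1] - f[0]
W = np.exp(-(f / 0.30) ** 2)                   # task function for a nodule-volume task
TTF = np.exp(-(f / 0.50) ** 2)                 # resolution properties of the protocol
NPS = 0.05 * (1.0 + (f / 0.40) ** 2)           # noise properties of the protocol

# Generic task-based figure of merit: integral of |W * TTF|^2 / NPS over frequency
fom = np.sum((W * TTF) ** 2 / NPS) * dF
print(f"Fourier-domain figure of merit: {fom:.3f}")
```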

  5. Generalized PSF modeling for optimized quantitation in PET imaging

    NASA Astrophysics Data System (ADS)

    Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman

    2017-06-01

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  6. Generalized PSF modeling for optimized quantitation in PET imaging.

    PubMed

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  7. A community mental health service delivery model: integrating the evidence base within existing clinical models.

    PubMed

    Flannery, Frank; Adams, Danielle; O'Connor, Nick

    2011-02-01

    A model of care for community mental health services was developed by reviewing the available literature, surveying 'best practice' and evaluating the performance of existing services in a metropolitan area mental health service servicing a population of approximately 1.1 million people. A review of relevant academic literature and recognized 'good practice' service delivery models was undertaken in conjunction with a review of local activity data and consultation with key stakeholders (not addressed in this paper). A model was developed identifying the core functions of community mental health service delivery. The components of a comprehensive, integrated model of community mental health service (CMHS) are outlined. The essential components of a comprehensive, integrated model of CMHSs include: acute and emergency response, community continuing care services, assertive rehabilitation teams, partnerships with general practitioners and other human services agencies. We propose a comprehensive integrated model of community mental health service. Clarity of role, required outputs and expected outcomes will assist the development of effective and appropriate community mental health services. Outreach to the community is a key success factor for these services and their associated inpatient services. Gap analysis can assist in the planning and costing of community mental health services.

  8. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task.

  9. PeptideDepot: Flexible Relational Database for Visual Analysis of Quantitative Proteomic Data and Integration of Existing Protein Information

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2010-01-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895

  10. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  11. Quantitative measurement and modeling of sensitization development in stainless steel

    SciTech Connect

    Bruemmer, S.M.; Atteridge, D.G.

    1992-09-01

    The state-of-the-art to quantitatively measure and model sensitization development in austenitic stainless steels is assessed and critically analyzed. A modeling capability is evolved and validated using a diverse experimental data base. Quantitative predictions are demonstrated for simple and complex thermal and thermomechanical treatments. Commercial stainless steel heats ranging from high-carbon Type 304 and 316 to low-carbon Type 304L and 316L have been examined including many heats which correspond to extra-low-carbon, nuclear-grade compositions. Within certain limits the electrochemical potentiokinetic reactivation (EPR) test was found to give accurate and reproducible measurements of the degree of sensitization (DOS) in Type 304 and 316 stainless steels. EPR test results are used to develop the quantitative data base and evolve/validate the quantitative modeling capability. This thesis represents a first step to evolve methods for the quantitative assessment of structural reliability in stainless steel components and weldments. Assessments will be based on component-specific information concerning material characteristics, fabrication history and service exposure. Methods will enable fabrication (e.g., welding and repair welding) procedures and material aging effects to be evaluated and ensure adequate cracking resistance during the service lifetime of reactor components. This work is being conducted by the Oregon Graduate Institute with interactive input from personnel at Pacific Northwest Laboratory.

  12. Designer substrate library for quantitative, predictive modeling of reaction performance

    PubMed Central

    Bess, Elizabeth N.; Bischoff, Amanda J.; Sigman, Matthew S.

    2014-01-01

    Assessment of reaction substrate scope is often a qualitative endeavor that provides general indications of substrate sensitivity to a measured reaction outcome. Unfortunately, this field standard typically falls short of enabling the quantitative prediction of new substrates’ performance. The disconnection between a reaction’s development and the quantitative prediction of new substrates’ behavior limits the applicative usefulness of many methodologies. Herein, we present a method by which substrate libraries can be systematically developed to enable quantitative modeling of reaction systems and the prediction of new reaction outcomes. Presented in the context of rhodium-catalyzed asymmetric transfer hydrogenation, these models quantify the molecular features that influence enantioselection and, in so doing, lend mechanistic insight to the modes of asymmetric induction. PMID:25267648

  13. Quantitative and qualitative comparison of a new prosthetic suspension system with two existing suspension systems for lower limb amputees.

    PubMed

    Eshraghi, Arezoo; Abu Osman, Noor Azuan; Karimi, Mohammad Taghi; Gholizadeh, Hossien; Ali, Sadeeq; Wan Abas, Wan Abu Bakar

    2012-12-01

    The objectives of this study were to compare the effects of a newly designed magnetic suspension system with those of two existing suspension methods on pistoning inside the prosthetic socket and to compare satisfaction and perceived problems among transtibial amputees. In this prospective study, three lower limb prostheses with three different suspension systems were fabricated for ten transtibial amputees. The participants used each of the three prostheses for 1 mo in random order. Pistoning inside the prosthetic socket was measured by a motion analysis system. The Prosthesis Evaluation Questionnaire was used to evaluate satisfaction and perceived problems with each suspension system. The lowest pistoning motion was found with the suction system compared with the other two suspension systems (P < 0.05). The new suspension system showed peak pistoning values similar to those of the pin lock system (P = 0.086). The results of the questionnaire survey revealed significantly higher satisfaction rates with the new system than with the other two systems in donning and doffing, walking, uneven walking, stair negotiation, and overall satisfaction (P < 0.05). The new suspension system has the potential to be used as an alternative to the available suspension systems. The pistoning motion was comparable to that of the other two systems. The new system provided prosthetic suspension comparable to that of the other two systems (suction and pin lock). The satisfaction with donning and doffing was high with the magnetic system. Moreover, the subjects reported fewer problems with the new system.

  14. Existing Models of Maternal Death Surveillance Systems: Protocol for a Scoping Review

    PubMed Central

    Shahabuddin, ASM; Zhang, Wei Hong; Firoz, Tabassum; Englert, Yvon; Nejjari, Chakib; De Brouwere, Vincent

    2016-01-01

    Background Maternal mortality measurement remains a critical challenge, particularly in low and middle income countries (LMICs) where little or no data are available and maternal mortality and morbidity are often the highest in the world. Despite the progress made in data collection, underreporting and translating the results into action are two major challenges that maternal death surveillance systems (MDSSs) face in LMICs. Objective This paper presents a protocol for a scoping review aimed at synthesizing the existing models of MDSSs and factors that influence their completeness and usefulness. Methods The methodology for scoping reviews from the Joanna Briggs Institute was used as a guide for developing this protocol. A comprehensive literature search will be conducted across relevant electronic databases. We will include all articles that describe MDSSs or assess their completeness or usefulness. At least two reviewers will independently screen all articles, and discrepancies will be resolved through discussion. The same process will be used to extract data from studies fulfilling the eligibility criteria. Data analysis will involve quantitative and qualitative methods. Results Currently, abstract screening is under way and the first results are expected to be publicly available by mid-2017. The synthesis of the reviewed materials will be presented in tabular form complemented by a narrative description. The results will be classified into main conceptual categories obtained during data extraction. Conclusions We anticipate that the results will provide a broad overview of MDSSs and describe factors related to their completeness and usefulness. The results will allow us to identify research gaps concerning the barriers and facilitating factors facing MDSSs. Results will be disseminated through publication in a peer-reviewed journal and conferences, as well as to domestic and international agencies in charge of implementing MDSSs. PMID:27729305

  15. Autonomous Temperature Data Acquisition Compared to Existing Thermal Models of Different Sediments

    NASA Astrophysics Data System (ADS)

    Jackson, R. G.; Sanders, N. H.; Ward, C. C.; Ward, F. R.; Benson, S. M.; Lee, N. F.

    2010-03-01

    Our team wanted to improve sampling methods for experiments on the thermal properties of martian sediments. We built a robot that could take the data autonomously over a period of days, and then compared them to existing models.

  16. Towards Quantitative Systems Pharmacology Models of Chemotherapy‐Induced Neutropenia

    PubMed Central

    2017-01-01

    Neutropenia is a serious toxic complication of chemotherapeutic treatment. For years, mathematical models have been developed to better predict hematological outcomes during chemotherapy in both the traditional pharmaceutical sciences and mathematical biology disciplines. An increasing number of quantitative systems pharmacology (QSP) models that combine systems approaches, physiology, and pharmacokinetics/pharmacodynamics have been successfully developed. Here, I detail the shift towards QSP efforts, emphasizing the importance of incorporating systems‐level physiological considerations in pharmacometrics. PMID:28418603

  17. A GPGPU accelerated modeling environment for quantitatively characterizing karst systems

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Covington, M. D.; Luhmann, A. J.; Saar, M. O.

    2011-12-01

    The ability to derive quantitative information on the geometry of karst aquifer systems is highly desirable. Knowing the geometric makeup of a karst aquifer system enables quantitative characterization of the system's response to hydraulic events. However, the relationship between flow path geometry and karst aquifer response is not well understood. One method to improve this understanding is the use of high speed modeling environments. High speed modeling environments offer great potential in this regard as they allow researchers to improve their understanding of the modeled karst aquifer through fast quantitative characterization. To that end, we have implemented a finite difference model using General Purpose Graphics Processing Units (GPGPUs). GPGPUs are special purpose accelerators which are capable of high speed and highly parallel computation. The GPGPU architecture is a grid-like structure, making it a natural fit for structured systems like finite difference models. To characterize the highly complex nature of karst aquifer systems, our modeling environment is designed to use an inverse method to conduct the parameter tuning. Using an inverse method reduces the total amount of parameter space needed to produce a set of parameters describing a system of good fit. Systems of good fit are determined with a comparison to reference storm responses. To obtain reference storm responses we have collected data from a series of data-loggers measuring water depth, temperature, and conductivity at locations along a cave stream with a known geometry in southeastern Minnesota. By comparing the modeled responses to the reference responses, the model parameters can be tuned to quantitatively characterize geometry, and thus, the response of the karst system.
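
    A toy sketch of the inverse step is given below: a single hypothetical geometry parameter is tuned so that a simple exponential recession matches a synthetic reference storm response. In the actual environment the stand-in modeled_response function would be replaced by the GPGPU finite difference model of the cave stream.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy conduit response: exponential recession whose time constant depends on a
# single hypothetical geometry parameter (a stand-in for the full model).
t = np.linspace(0.0, 48.0, 200)                      # hours

def modeled_response(diameter):
    tau = 5.0 * diameter                             # hypothetical geometry-recession relation
    return np.exp(-t / tau)

# Synthetic "reference storm response" standing in for the data-logger record
reference = modeled_response(1.8) + np.random.default_rng(1).normal(0.0, 0.01, t.size)

# Inverse step: tune the geometry parameter by minimizing misfit to the reference
res = minimize_scalar(lambda d: float(np.sum((modeled_response(d) - reference) ** 2)),
                      bounds=(0.1, 5.0), method="bounded")
print(f"recovered diameter parameter ~ {res.x:.2f}")
```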

  18. Quantitative and logic modelling of gene and molecular networks

    PubMed Central

    Le Novère, Nicolas

    2015-01-01

    Behaviours of complex biomolecular systems are often irreducible to the elementary properties of their individual components. Explanatory and predictive mathematical models are therefore useful for fully understanding and precisely engineering cellular functions. The development and analyses of these models require their adaptation to the problems that need to be solved and the type and amount of available genetic or molecular data. Quantitative and logic modelling are among the main methods currently used to model molecular and gene networks. Each approach comes with inherent advantages and weaknesses. Recent developments show that hybrid approaches will become essential for further progress in synthetic biology and in the development of virtual organisms. PMID:25645874

  19. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology

    PubMed Central

    Bachmann, Julie; Matteson, Andrew; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D.; Theis, Fabian J.; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows the quality of experimental data to be determined in an efficient, objective and automated manner. Using this approach, data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on Latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642

  20. Lessons learned from quantitative dynamical modeling in systems biology.

    PubMed

    Raue, Andreas; Schilling, Marcel; Bachmann, Julie; Matteson, Andrew; Schelker, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D; Theis, Fabian J; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows the quality of experimental data to be determined in an efficient, objective and automated manner. Using this approach, data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on Latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here.
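
    A minimal sketch of the multi-start strategy highlighted above (Latin hypercube starting points followed by derivative-based local optimization), using a stand-in least-squares objective rather than an actual ODE model:

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

# Stand-in least-squares objective (a real application would compare an ODE
# model's output against the experimental data).
def objective(p):
    return float(np.sum((p - np.array([1.0, -2.0, 0.5])) ** 2 * np.array([1.0, 5.0, 0.2])))

lower = np.array([-5.0, -5.0, -5.0])
upper = np.array([5.0, 5.0, 5.0])

# Latin hypercube sample of starting points, scaled to the parameter bounds
starts = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(n=20), lower, upper)

# Derivative-based local optimization from every start; keep the best fit
fits = [minimize(objective, x0, method="L-BFGS-B", bounds=list(zip(lower, upper)))
        for x0 in starts]
best = min(fits, key=lambda r: r.fun)
print(best.x, best.fun)
```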

  1. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    SciTech Connect

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-07-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
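
    For readers unfamiliar with the technique, the sketch below shows a minimal continuous-time Markov reliability model solved with a matrix exponential; the three-state system and transition rates are hypothetical and far simpler than the digital feedwater level control system analyzed in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-state system: 0 = OK, 1 = degraded, 2 = failed (absorbing).
lam1, lam2, mu = 1e-3, 5e-3, 1e-2      # transition rates per hour (illustrative)
Q = np.array([[-lam1,        lam1,        0.0 ],
              [   mu, -(mu + lam2),      lam2 ],
              [  0.0,         0.0,        0.0 ]])   # generator matrix (rows sum to zero)

p0 = np.array([1.0, 0.0, 0.0])          # start in the OK state
for t in (100.0, 1000.0, 10000.0):
    pt = p0 @ expm(Q * t)               # state probabilities at time t
    print(f"t = {t:>7.0f} h   P(failed) = {pt[2]:.4f}")
```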

  2. Sensitivity, noise and quantitative model of Laser Speckle Contrast Imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai

    In the dissertation, I present several studies on Laser Speckle Contrast Imaging (LSCI). The two major goals of those studies are: (1) to improve the signal-to-noise ratio (SNR) of LSCI so it can be used to detect small blood flow changes due to brain activity; (2) to find a reliable quantitative model so LSCI results can be compared among experiments and subjects and even with results from other blood flow monitoring techniques. We sought to improve SNR in the following ways: (1) We investigated the relationship between exposure time and the sensitivities of LSCI. We found that relative sensitivity reaches its maximum at an exposure time of around 5 ms. (2) We studied the relationship between laser speckle and camera aperture stop, which is actually the relationship between laser speckle and speckle/pixel size ratio. In general, the speckle-to-pixel size ratio should be approximately 1.5-2 to maximize the detection factor beta as well as the speckle contrast (SC) value and absolute sensitivity. This is also an important study for quantitative model development. (3) We worked on noise analysis and modeling. Noise affects both the SNR and the quantitative model. Usually random noise is more critical for SNR analysis. The main sources of random noise in LSCI are statistical noise and physiological noise. Some physiological noise is caused by small motions induced by heartbeat or breathing. These are periodic and can be eliminated using methods discussed in this dissertation. Statistical noise is more fundamental and cannot be eliminated entirely. However, it can be greatly reduced by increasing the effective pixel number N for speckle contrast processing. To develop the quantitative model, we did the following: (1) We considered more experimental factors in the quantitative model and removed several ideal case assumptions. In particular, in our model we considered the general detection factor beta, static scatterers and systematic noise. A simple calibration procedure is suggested
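
    A minimal sketch of the spatial speckle contrast computation that underlies LSCI (K = sigma/mean over a sliding window); the window size and the synthetic frame are illustrative rather than values from the dissertation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Spatial speckle contrast K = sigma / mean over a sliding win x win window."""
    img = raw.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img ** 2, win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# Illustrative use on a synthetic frame; real frames come from the camera,
# e.g. at the ~5 ms exposure time found to maximize relative sensitivity.
frame = np.random.default_rng(0).poisson(100, size=(256, 256))
K = speckle_contrast(frame)
print(f"mean speckle contrast: {K.mean():.3f}")
```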

  3. An evaluation of recent quantitative magnetospheric magnetic field models

    NASA Technical Reports Server (NTRS)

    Walker, R. J.

    1976-01-01

    Magnetospheric field models involving dipole tilt effects are discussed, with particular reference to defined magnetopause models and boundary surface models. The models are compared with observations and with each other whenever possible. It is shown that models containing only contributions from magnetopause and tail current systems are capable of reproducing the observed quiet time field only in a qualitative way. The best quantitative agreement between models and observations takes place when currents distributed in the inner magnetosphere are added to the magnetopause and tail current systems. One region in which all the models fall short is the region around the polar cusp. Obtaining physically reasonable gradients in this region should have high priority in the development of future models.

  4. Wave propagation models for quantitative defect detection by ultrasonic methods

    NASA Astrophysics Data System (ADS)

    Srivastava, Ankit; Bartoli, Ivan; Coccia, Stefano; Lanza di Scalea, Francesco

    2008-03-01

    Ultrasonic guided wave testing necessitates quantitative, rather than qualitative, information on flaw size, shape and position. This quantitative diagnosis ability can be used to provide meaningful data to a prognosis algorithm for remaining life prediction, or simply to generate data sets for a statistical defect classification algorithm. Quantitative diagnostics needs models able to represent the interaction of guided waves with various defect scenarios. One such model is the Global-Local (GL) method, which uses a full finite element discretization of the region around a flaw to properly represent wave diffraction, and a suitable set of wave functions to simulate regions away from the flaw. Displacement and stress continuity conditions are imposed at the boundary between the global and the local regions. In this paper the GL method is expanded to take advantage of the Semi-Analytical Finite Element (SAFE) method in the global portion of the waveguide. The SAFE method is efficient because it only requires the discretization of the cross-section of the waveguide to obtain the wave dispersion solutions and it can handle complex structures such as multilayered sandwich panels. The GL method is applied to predicting quantitatively the interaction of guided waves with defects in aluminum and composite structural components.

  5. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    NASA Technical Reports Server (NTRS)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) Optimization tools; 2) Gas turbine models for hybrid systems; 3) Increased interplay between subsystems; 4) Off-design modeling capabilities; 5) Altitude effects; and 6) Existing transient modeling architecture. Other factors include: 1) Easier transfer between users and groups of users; 2) General aerospace industry acceptance and familiarity; and 3) A flexible analysis tool that can also be used for ground power applications.

  6. On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    1981-01-01

    Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)

  7. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    PubMed

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement, and provide quantitative guidance, for the design of safer railway systems, on speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  8. Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering

    DTIC Science & Technology

    2014-06-01

    Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering. Master's thesis by Joseph W. Nutting, June 2014. Model-based systems engineering (MBSE) needs a formal language, one defined with explicit rules between its elements, in order to support the use of formal modeling in

  9. Quantitative Systems Pharmacology: A Case for Disease Models

    PubMed Central

    Ramanujan, S; Schmidt, BJ; Ghobrial, OG; Lu, J; Heatherington, AC

    2016-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model‐informed drug discovery and development, supporting program decisions from exploratory research through late‐stage clinical trials. In this commentary, we discuss the unique value of disease‐scale “platform” QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. PMID:27709613

  10. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make the MMM data dispersive and complicate quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K vs is sensitive to the damage degree, especially at early and hidden damage stages. Considering the dispersion of MMM data, the K vs statistical law is investigated, which shows that K vs obeys a Gaussian distribution. Thus, K vs is a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and the maximum error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
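
    A minimal numeric sketch of the stress-strength interference calculation that underlies such reliability models, assuming Gaussian stress and strength with hypothetical moments (not the fitted K vs statistics from the study):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical Gaussian strength S and stress s (values are illustrative only)
mu_S, sigma_S = 400.0, 30.0     # strength mean / std (MPa)
mu_s, sigma_s = 320.0, 25.0     # stress mean / std (MPa)

# Stress-strength interference: reliability R = P(S > s) for independent Gaussians
z = (mu_S - mu_s) / sqrt(sigma_S**2 + sigma_s**2)
R = NormalDist().cdf(z)
print(f"reliability degree R = {R:.4f}")
```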

  11. Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models

    NASA Technical Reports Server (NTRS)

    Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.

    1979-01-01

    The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams make the modeling of individual events a topical activity.

  12. Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models

    NASA Technical Reports Server (NTRS)

    Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.

    1979-01-01

    The evolution of quantitative models of the trapped radiation belts is traced to show how the knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty, therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams make the modeling of individual events a topical activity.

  13. The methodology for the existing complex pneumatic systems efficiency increase with the use of mathematical modeling

    NASA Astrophysics Data System (ADS)

    Danilishin, A. M.; Kartashov, S. V.; Kozhukhov, Y. V.; Kozin, E. G.

    2017-08-01

    A method for increasing the efficiency of existing complex pneumatic systems has been developed; it includes survey steps, mathematical modeling of the technological process, optimization of the pneumatic system configuration and its operating modes, and selection of optimal compressor units and additional equipment. Practical application of the methodology is illustrated by the reconstruction of an existing underground depot pneumatic system. The first stage of the methodology is a survey of the existing pneumatic system. The second stage is multivariable mathematical modeling of the pneumatic system's operation. The developed methodology is applicable to complex pneumatic systems.

  14. Analysis of Existing Hydrologic Models, Red River of the North Drainage Basin, North Dakota and Minnesota.

    DTIC Science & Technology

    1980-11-01

    Certain considerations in the depressional storage and drainage phase are available which act as a type of storage routing technique by virtue of the depth... (Report prepared by CH2M Hill for the Corps of Engineers.)

  15. A framework for the merging of pre-existing and correspondenceless 3D statistical shape models.

    PubMed

    Pereañez, Marco; Lekadir, Karim; Butakoff, Constantine; Hoogendoorn, Corné; Frangi, Alejandro F

    2014-10-01

    The construction of statistical shape models (SSMs) that are rich, i.e., that represent well the natural and complex variability of anatomical structures, is an important research topic in medical imaging. To this end, existing works have addressed the limited availability of training data by decomposing the shape variability hierarchically or by combining statistical and synthetic models built using artificially created modes of variation. In this paper, we present instead a method that merges multiple statistical models of 3D shapes into a single integrated model, thus effectively encoding extra variability that is anatomically meaningful, without the need for the original or new real datasets. The proposed framework has great flexibility due to its ability to merge multiple statistical models with unknown point correspondences. The approach is beneficial in order to re-use and complement pre-existing SSMs when the original raw data cannot be exchanged due to ethical, legal, or practical reasons. To this end, this paper describes two main stages, i.e., (1) statistical model normalization and (2) statistical model integration. The normalization algorithm uses surface-based registration to bring the input models into a common shape parameterization with point correspondence established across eigenspaces. This allows the model fusion algorithm to be applied in a coherent manner across models, with the aim to obtain a single unified statistical model of shape with improved generalization ability. The framework is validated with statistical models of the left and right cardiac ventricles, the L1 vertebra, and the caudate nucleus, constructed at distinct research centers based on different imaging modalities (CT and MRI) and point correspondences. The results demonstrate that the model integration is statistically and anatomically meaningful, with potential value for merging pre-existing multi-modality statistical models of 3D shapes.

  16. Quantitative modeling of the physiology of ascites in portal hypertension

    PubMed Central

    2012-01-01

    Although the factors involved in cirrhotic ascites have been studied for a century, a number of observations are not understood, including the action of diuretics in the treatment of ascites and the ability of the plasma-ascitic albumin gradient to diagnose portal hypertension. This communication presents an explanation of ascites based solely on pathophysiological alterations within the peritoneal cavity. A quantitative model is described based on experimental vascular and intraperitoneal pressures, lymph flow, and peritoneal space compliance. The model's predictions accurately mimic clinical observations in ascites, including the magnitude and time course of changes observed following paracentesis or diuretic therapy. PMID:22453061

  17. Quantitative phase-field modeling of dendritic electrodeposition

    NASA Astrophysics Data System (ADS)

    Cogswell, Daniel A.

    2015-07-01

    A thin-interface phase-field model of electrochemical interfaces is developed based on Marcus kinetics for concentrated solutions, and used to simulate dendrite growth during electrodeposition of metals. The model is derived in the grand electrochemical potential to permit the interface to be widened to reach experimental length and time scales, and electroneutrality is formulated to eliminate the Debye length. Quantitative agreement is achieved with zinc Faradaic reaction kinetics, fractal growth dimension, tip velocity, and radius of curvature. Reducing the exchange current density is found to suppress the growth of dendrites, and screening electrolytes by their exchange currents is suggested as a strategy for controlling dendrite growth in batteries.

  18. Quantitative magnetospheric models derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.
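
    As a schematic of the fitting procedure, the sketch below performs a least-squares fit of synthetic field values to a low-order expansion in position coordinates; the coordinates, expansion terms and "measurements" are placeholders rather than IMP data or the actual model's basis functions.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.uniform(-10.0, 10.0, size=(3, 500))     # synthetic positions (Earth radii)
Bz_obs = 20.0 - 0.5 * x + 0.1 * x * y + rng.normal(0.0, 1.0, 500)  # synthetic field values (nT)

# Design matrix of expansion terms: 1, x, y, z, xy, xz, yz
A = np.column_stack([np.ones_like(x), x, y, z, x * y, x * z, y * z])

# Least-squares fit of the expansion coefficients to the "measurements"
coef, residuals, rank, _ = np.linalg.lstsq(A, Bz_obs, rcond=None)
print(coef)
```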

  19. Discrete modeling of hydraulic fracturing processes in a complex pre-existing fracture network

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rutqvist, J.; Nakagawa, S.; Houseworth, J. E.; Birkholzer, J. T.

    2015-12-01

    Hydraulic fracturing and stimulation of fracture networks are widely used by the energy industry (e.g., shale gas extraction, enhanced geothermal systems) to increase the permeability of geological formations. Numerous analytical and numerical models have been developed to help understand and predict the behavior of hydraulically induced fractures. However, many existing models assume simple fracturing scenarios with highly idealized fracture geometries (e.g., propagation of a single fracture with assumed shapes in a homogeneous medium). Modeling hydraulic fracture propagation in the presence of natural fractures and heterogeneities can be very challenging because of the complex interactions between fluid, rock matrix, and rock interfaces, as well as the interactions between propagating fractures and pre-existing natural fractures. In this study, the TOUGH-RBSN code for coupled hydro-mechanical modeling is utilized to simulate hydraulic fracture propagation and its interaction with pre-existing fracture networks. The simulation tool combines TOUGH2, a simulator of subsurface multiphase flow and mass transport based on the finite volume approach, with the implementation of a lattice modeling approach for geomechanical and fracture-damage behavior, named Rigid-Body-Spring Network (RBSN). The discrete fracture network (DFN) approach is facilitated in the Voronoi discretization via a fully automated modeling procedure. The numerical program is verified through a simple simulation for single fracture propagation, in which the resulting fracture geometry is compared to an analytical solution for given fracture length and aperture. Subsequently, predictive simulations are conducted for planned laboratory experiments using rock-analogue (soda-lime glass) samples containing a designed, pre-existing fracture network. The results of a preliminary simulation demonstrate selective fracturing and fluid infiltration along the pre-existing fractures, with additional fracturing in part

  20. Possibility of quantitative prediction of cavitation erosion without model test

    SciTech Connect

    Kato, Hiroharu; Konno, Akihisa; Maeda, Masatsugu; Yamaguchi, Hajime

    1996-09-01

    A scenario for quantitative prediction of cavitation erosion was proposed. The key value is the impact force/pressure spectrum on a solid surface caused by cavitation bubble collapse. As the first step of prediction, the authors constructed the scenario from an estimation of the cavity generation rate to the prediction of impact force spectrum, including the estimations of collapsing cavity number and impact pressure. The prediction was compared with measurements of impact force spectra on a partially cavitating hydrofoil. A good quantitative agreement was obtained between the prediction and the experiment. However, the present method predicted a larger effect of main flow velocity than that observed. The present scenario is promising as a method of predicting erosion without using a model test.

  1. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    PubMed Central

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  2. Existence and uniqueness of solutions from the LEAP equilibrium energy-economy model

    SciTech Connect

    Oblow, E.M.

    1982-10-01

    A study was made of the existence and uniqueness of solutions to the long-range, energy-economy model LEAP. The code is a large-scale, long-range (50 year) equilibrium model of energy supply and demand in the US economy used for government and industrial forecasting. The study focused on the two features which distinguish LEAP from other equilibrium models - the treatment of product allocation and the basic conversion of materials into an energy end product. Both allocation and conversion processes are modeled in a behavioral fashion which differs from classical economic paradigms. The results of the study indicate that while LEAP contains desirable behavioral features, these same features can give rise to non-uniqueness in the solution of the allocation and conversion process equations. Conditions under which existence and uniqueness of solutions might not occur are developed in detail and their impact in practical applications is discussed.

  3. Existence of periodic solutions for a periodic mutualism model on time scales

    NASA Astrophysics Data System (ADS)

    Li, Yongkun; Zhang, Hongtao

    2008-07-01

    By using Mawhin's continuation theorem of coincidence degree theory, sufficient criteria are obtained for the existence of periodic solutions of a delayed mutualism model on time scales in which the coefficients r_i, K_i, α_i, τ_i, σ_i (i = 1, 2) are functions of period ω > 0 and α_i > K_i, i = 1, 2.
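
    The displayed system in the printed abstract did not survive extraction. For orientation only, a commonly studied delayed two-species mutualism model with ω-periodic coefficients consistent with the parameters listed above is (an assumed form, not a quotation of the paper):

        \begin{aligned}
        x_1'(t) &= r_1(t)\,x_1(t)\left[\frac{K_1(t)+\alpha_1(t)\,x_2(t-\tau_2(t))}{1+x_2(t-\tau_2(t))} - x_1(t-\sigma_1(t))\right],\\
        x_2'(t) &= r_2(t)\,x_2(t)\left[\frac{K_2(t)+\alpha_2(t)\,x_1(t-\tau_1(t))}{1+x_1(t-\tau_1(t))} - x_2(t-\sigma_2(t))\right],
        \end{aligned}

    with α_i(t) > K_i(t); on a general time scale the ordinary derivatives are replaced by delta derivatives.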

  4. Existence of global weak solution for a reduced gravity two and a half layer model

    SciTech Connect

    Guo, Zhenhua Li, Zilai Yao, Lei

    2013-12-15

    We investigate the existence of a global weak solution to a reduced gravity two and a half layer model in a one-dimensional bounded spatial domain or periodic domain. We also show that any possible vacuum state has to vanish within finite time, after which the weak solution becomes a unique strong one.

  5. Existence of standard models of conic fibrations over non-algebraically-closed fields

    SciTech Connect

    Avilov, A A

    2014-12-31

    We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  6. The existence of Newtonian analogs of a class of 5D Wesson's cosmological models

    NASA Astrophysics Data System (ADS)

    Waga, I.

    1992-07-01

    The conditions for the existence of Newtonian analogs of a five-dimensional (5D) generalization of the Friedmann-Robertson-Walker (FRW) cosmological models in Wesson's gravitational theory are re-analyzed. Contrary to other claims, we show that classical analogs can be obtained for a non-null cosmological constant and negative or null spatial curvature.

  7. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  8. Existence of limit cycles in the Solow model with delayed-logistic population growth.

    PubMed

    Bianca, Carlo; Guerrini, Luca

    2014-01-01

    This paper is devoted to the existence and stability analysis of limit cycles in a delayed mathematical model of economic growth. Specifically, the Solow model is further improved by inserting the time delay into the logistic population growth rate. Moreover, by choosing the time delay as a bifurcation parameter, we prove that the system loses its stability and a Hopf bifurcation occurs when the time delay passes through critical values. Finally, numerical simulations are carried out to support the analytical results.

  9. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    PubMed

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-11-06

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams, with data-centric annotations derived from our existing data warehouse.

  10. Mentoring for junior medical faculty: Existing models and suggestions for low-resource settings.

    PubMed

    Menon, Vikas; Muraleedharan, Aparna; Bhat, Ballambhattu Vishnu

    2016-02-01

    Globally, there is increasing recognition of the positive benefits and impact of mentoring on faculty retention rates, career satisfaction and scholarly output. However, emphasis on the research and practice of mentoring is comparatively meagre in low and middle income countries. In this commentary, we critically examine two existing models of mentorship for medical faculty and offer a few suggestions for an integrated hybrid model that can be adapted for use in low-resource settings. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Existence of Limit Cycles in the Solow Model with Delayed-Logistic Population Growth

    PubMed Central

    2014-01-01

    This paper is devoted to the existence and stability analysis of limit cycles in a delayed mathematical model of economic growth. Specifically, the Solow model is further improved by inserting the time delay into the logistic population growth rate. Moreover, by choosing the time delay as a bifurcation parameter, we prove that the system loses its stability and a Hopf bifurcation occurs when the time delay passes through critical values. Finally, numerical simulations are carried out to support the analytical results. PMID:24592147

  12. Model selection for quantitative trait locus analysis in polyploids.

    PubMed

    Doerge, R W; Craig, B A

    2000-07-05

    Over the years, substantial gains have been made in locating regions of agricultural genomes associated with characteristics, diseases, and agroeconomic traits. These gains have relied heavily on the ability to statistically estimate the association between DNA markers and regions of a genome (quantitative trait loci or QTL) related to a particular trait. The majority of these advances have focused on diploid species, even though many important agricultural crops are, in fact, polyploid. The purpose of our work is to initiate an algorithmic approach for model selection and QTL detection in polyploid species. This approach involves the construction of all possible chromosomal configurations (models) that may result in a gamete, model reduction based on estimation of marker dosage from progeny data, and lastly model selection. While simplified for initial explanation, our approach has demonstrated itself to be extendible to many breeding schemes and less restricted settings.

  13. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    NASA Technical Reports Server (NTRS)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed for this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  14. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.
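
    A minimal sketch of the fixed-effect idea is given below: the genotype "function" of each individual is projected onto a small set of basis functions over the region, and the joint effect of the resulting scores on the trait is tested with an F statistic. The basis, dimensions and data are placeholders; the published method additionally adjusts for covariates and smooths the genetic effect function.

        # Hypothetical sketch of a fixed-effect functional linear model test:
        # expand each individual's genotype "function" over a few basis functions,
        # then F-test the joint effect of the basis scores on the trait.
        import numpy as np
        from scipy.stats import f as f_dist

        rng = np.random.default_rng(1)
        n, m, K = 300, 50, 5                      # individuals, variant positions, basis functions
        pos = np.linspace(0, 1, m)                # normalized physical positions
        G = rng.binomial(2, 0.05, size=(n, m))    # genotypes (rare variants)
        y = G[:, 10] * 0.8 + rng.normal(size=n)   # trait with one causal variant

        B = np.array([np.cos(np.pi * k * pos) for k in range(K)]).T   # (m, K) cosine basis
        X = G @ B                                  # functional scores, (n, K)

        def rss(design, y):
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            r = y - design @ beta
            return r @ r

        X0 = np.ones((n, 1))                       # null model: intercept only
        X1 = np.column_stack([X0, X])              # full model: intercept + functional scores
        rss0, rss1 = rss(X0, y), rss(X1, y)
        df1, df2 = K, n - K - 1
        F = ((rss0 - rss1) / df1) / (rss1 / df2)
        print("F =", round(F, 2), " p =", f_dist.sf(F, df1, df2))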

  15. Functional Linear Models for Association Analysis of Quantitative Traits

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Mills, James L.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao

    2014-01-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. PMID:24130119

  16. Three models intercomparison for Quantitative Precipitation Forecast over Calabria

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.

    2004-11-01

    In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations), funded by MIUR (Ministero dell'Università e della Ricerca Scientifica - the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Model 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison. This model runs operationally at the “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici”. Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. The large-scale forcing is the same for all models considered, and differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models; the largest differences are in BIA (frequency bias) compared with the other considered scores. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better for the second-day forecast.
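
    For readers unfamiliar with the scores mentioned above, the sketch below computes the frequency bias (BIA) and, for comparison, the equitable threat score from a forecast-observation contingency table for one rainfall threshold; the data and threshold are invented.

        # Hypothetical sketch: categorical verification of a quantitative precipitation
        # forecast against raingauge observations using the frequency bias (BIA) and
        # the equitable threat score (ETS) for one rainfall threshold.
        import numpy as np

        def contingency(fcst, obs, thr):
            f, o = fcst >= thr, obs >= thr
            hits = np.sum(f & o); false_al = np.sum(f & ~o)
            misses = np.sum(~f & o); corr_neg = np.sum(~f & ~o)
            return hits, false_al, misses, corr_neg

        def bia_ets(fcst, obs, thr):
            h, fa, m, cn = contingency(fcst, obs, thr)
            n = h + fa + m + cn
            bia = (h + fa) / (h + m)                         # frequency bias
            h_rand = (h + fa) * (h + m) / n                  # hits expected by chance
            ets = (h - h_rand) / (h + fa + m - h_rand)       # equitable threat score
            return bia, ets

        rng = np.random.default_rng(2)
        obs = rng.gamma(0.5, 8.0, size=1000)                 # daily raingauge totals (mm)
        fcst = obs * rng.lognormal(0.0, 0.6, size=1000)      # imperfect model forecast
        print(bia_ets(fcst, obs, thr=10.0))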

  17. Existence of Torsional Solitons in a Beam Model of Suspension Bridge

    NASA Astrophysics Data System (ADS)

    Benci, Vieri; Fortunato, Donato; Gazzola, Filippo

    2017-06-01

    This paper studies the existence of solitons, namely stable solitary waves, in an idealized suspension bridge. The bridge is modeled as an unbounded degenerate plate, that is, a central beam with cross sections, and displays two degrees of freedom: the vertical displacement of the beam and the torsional angles of the cross sections. Under fairly general assumptions, we prove the existence of solitons. Under the additional assumption of large tension in the sustaining cables, we prove that these solitons have a nontrivial torsional component. This appears relevant for structural safety, since several suspension bridges have collapsed due to torsional oscillations.

  18. Quantitative Modeling of Growth and Dispersal in Population Models.

    DTIC Science & Technology

    1986-01-01

    partial differential equations. Applications to dispersal and nonlinear density-dependent growth/predation models are presented. Computational results using... depend only on size x. The ideas we present here can be readily modified to treat, theoretically and computationally, the more general case where g and m

  19. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  20. Prediction of photoperiodic regulators from quantitative gene circuit models.

    PubMed

    Salazar, José Domingo; Saithong, Treenut; Brown, Paul E; Foreman, Julia; Locke, James C W; Halliday, Karen J; Carré, Isabelle A; Rand, David A; Millar, Andrew J

    2009-12-11

    Photoperiod sensors allow physiological adaptation to the changing seasons. The prevalent hypothesis is that day length perception is mediated through coupling of an endogenous rhythm with an external light signal. Sufficient molecular data are available to test this quantitatively in plants, though not yet in mammals. In Arabidopsis, the clock-regulated genes CONSTANS (CO) and FLAVIN, KELCH, F-BOX (FKF1) and their light-sensitive proteins are thought to form an external coincidence sensor. Here, we model the integration of light and timing information by CO, its target gene FLOWERING LOCUS T (FT), and the circadian clock. Among other predictions, our models show that FKF1 activates FT. We demonstrate experimentally that this effect is independent of the known activation of CO by FKF1, thus we locate a major, novel controller of photoperiodism. External coincidence is part of a complex photoperiod sensor: modeling makes this complexity explicit and may thus contribute to crop improvement.

  1. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  2. Quantitative modeling of soil sorption for xenobiotic chemicals.

    PubMed Central

    Sabljić, A

    1989-01-01

    Experimental determination of the soil sorption behavior of xenobiotic chemicals during the last 10 years has been costly, time-consuming, and very tedious. Since an estimated 100,000 chemicals are currently in common use and new chemicals are registered at a rate of 1000 per year, it is obvious that our human and material resources are insufficient to obtain their soil sorption data experimentally. Much work is being done to find alternative methods that will enable us to accurately and rapidly estimate the soil sorption coefficients of pesticides and other classes of organic pollutants. Empirical models, based on water solubility and n-octanol/water partition coefficients, have been proposed as alternative, accurate methods to estimate soil sorption coefficients. An analysis of these models has shown (a) low precision of water solubility and n-octanol/water partition data, (b) a variety of quantitative models describing the relationship between soil sorption and the above-mentioned properties, and (c) violations of some basic statistical laws when these quantitative models were developed. During the last 5 years considerable efforts were made to develop nonempirical models that are free of the errors inherent in all models based on empirical variables. Thus far molecular topology has been shown to be the most successful structural property for describing and predicting soil sorption coefficients. The first-order molecular connectivity index was demonstrated to correlate extremely well with the soil sorption coefficients of polycyclic aromatic hydrocarbons (PAHs), alkylbenzenes, chlorobenzenes, chlorinated alkanes and alkenes, heterocyclic and heterosubstituted PAHs, and halogenated phenols. The average difference between predicted and observed soil sorption coefficients is only 0.2 on the logarithmic scale (corresponding to a factor of 1.5). A comparison of the molecular connectivity model with the empirical models described earlier shows that the former is superior in
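
    The first-order molecular connectivity index referred to above is straightforward to compute from a hydrogen-suppressed molecular graph; the sketch below does so for n-butane (the regression linking the index to soil sorption coefficients is not reproduced here).

        # Hypothetical sketch: first-order molecular connectivity (Randic) index,
        # 1chi = sum over bonds (i, j) of 1/sqrt(deg(i) * deg(j)), computed on the
        # hydrogen-suppressed molecular graph.  Example graph: n-butane (C-C-C-C).
        from math import sqrt

        def connectivity_index(bonds):
            deg = {}
            for i, j in bonds:
                deg[i] = deg.get(i, 0) + 1
                deg[j] = deg.get(j, 0) + 1
            return sum(1.0 / sqrt(deg[i] * deg[j]) for i, j in bonds)

        butane = [(0, 1), (1, 2), (2, 3)]
        print(round(connectivity_index(butane), 3))   # 1.914 for n-butane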

  3. A study about the existence of the leverage effect in stochastic volatility models

    NASA Astrophysics Data System (ADS)

    Florescu, Ionuţ; Pãsãricã, Cristian Gabriel

    2009-02-01

    The empirical relationship between the return of an asset and the volatility of the asset has been well documented in the financial literature. Named the leverage effect or sometimes the risk-premium effect, it is observed in real data that, when the return of the asset decreases, the volatility increases and vice versa. Consequently, it is important to demonstrate that any formulated model for the asset price is capable of generating this effect observed in practice. Furthermore, we need to understand the conditions on the parameters present in the model that guarantee the emergence of the leverage effect. In this paper we analyze two general specifications of stochastic volatility models and their capability of generating the perceived leverage effect. We derive conditions for the emergence of the leverage effect in both of these stochastic volatility models. We exemplify using stochastic volatility models used in practice and we explicitly state the conditions for the existence of the leverage effect in these examples.
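
    As a concrete illustration of the effect (not of the paper's general specifications), the sketch below simulates a Heston-type stochastic volatility model with negatively correlated shocks and checks that returns and volatility increments are negatively correlated; all parameter values are invented.

        # Hypothetical sketch: Euler simulation of a Heston-type stochastic volatility
        # model with correlated Brownian shocks (rho < 0), followed by a check that
        # returns and volatility changes are negatively correlated (leverage effect).
        import numpy as np

        rng = np.random.default_rng(3)
        T, n = 4.0, 100_000
        dt = T / n
        kappa, theta, xi, rho = 3.0, 0.04, 0.4, -0.7
        mu, s, v = 0.05, 100.0, 0.04

        rets, dvols = [], []
        for _ in range(n):
            z1 = rng.normal()
            z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal()
            dv = kappa * (theta - v) * dt + xi * np.sqrt(max(v, 0.0)) * np.sqrt(dt) * z2
            r = (mu - 0.5 * v) * dt + np.sqrt(max(v, 0.0)) * np.sqrt(dt) * z1
            s *= np.exp(r)
            v = max(v + dv, 0.0)
            rets.append(r)
            dvols.append(dv)

        print("corr(return, dvol) =", round(np.corrcoef(rets, dvols)[0, 1], 3))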

  4. Quantitative determination of guggulsterone in existing natural populations of Commiphora wightii (Arn.) Bhandari for identification of germplasm having higher guggulsterone content.

    PubMed

    Kulhari, Alpana; Sheorayan, Arun; Chaudhury, Ashok; Sarkar, Susheel; Kalia, Rajwant K

    2015-01-01

    Guggulsterone is an aromatic steroidal ketonic compound obtained from vertical rein ducts and canals of bark of Commiphora wightii (Arn.) Bhandari (Family - Burseraceae). Owing to its multifarious medicinal and therapeutic values as well as its various other significant bioactivities, guggulsterone has high demand in pharmaceutical, perfumery and incense industries. More and more pharmaceutical and perfumery industries are showing interest in guggulsterone, therefore, there is a need for its quantitative determination in existing natural populations of C. wightii. Identification of elite germplasm having higher guggulsterone content can be multiplied through conventional or biotechnological means. In the present study an effort was made to estimate two isoforms of guggulsterone i.e. E and Z guggulsterone in raw exudates of 75 accessions of C. wightii collected from three states of North-western India viz. Rajasthan (19 districts), Haryana (4 districts) and Gujarat (3 districts). Extracted steroid rich fraction from stem samples was fractionated using reverse-phase preparative High Performance Liquid Chromatography (HPLC) coupled with UV/VIS detector operating at wavelength of 250 nm. HPLC analysis of stem samples of wild as well as cultivated plants showed that the concentration of E and Z isomers as well as total guggulsterone was highest in Rajasthan, as compared to Haryana and Gujarat states. Highest concentration of E guggulsterone (487.45 μg/g) and Z guggulsterone (487.68 μg/g) was found in samples collected from Devikot (Jaisalmer) and Palana (Bikaner) respectively, the two hyper-arid regions of Rajasthan, India. Quantitative assay was presented on the basis of calibration curve obtained from a mixture of standard E and Z guggulsterones with different validatory parameters including linearity, selectivity and specificity, accuracy, auto-injector, flow-rate, recoveries, limit of detection and limit of quantification (as per norms of International

  5. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    PubMed Central

    2012-01-01

    Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results: Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give
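
    A much-simplified stepwise sketch in the spirit of the nonreversible (static-equilibrium) variant is shown below: the per-cycle efficiency is set by the competition between primer-target annealing and target-target reannealing, so amplification is exponential early on and plateaus as primers are consumed. The rate constants and concentrations are invented, and approximating the primed fraction by relative encounter rates is our simplification, not the paper's exact equilibrium solution.

        # Hypothetical, simplified stepwise qPCR sketch: per-cycle amplification
        # efficiency set by competition during annealing between primer-target and
        # target-target (reannealing) encounters, iterated cycle by cycle.
        def qpcr_curve(target0, primer0, cycles=40, k_primer=1.0, k_self=1.0):
            target, primer = target0, primer0
            curve = []
            for _ in range(cycles):
                # Fraction of single-stranded targets that anneal to a primer rather
                # than reannealing to a complementary target strand.
                eff = k_primer * primer / (k_primer * primer + k_self * target)
                newly_made = eff * target
                primer = max(primer - newly_made, 0.0)   # primers are consumed
                target += newly_made
                curve.append(target)
            return curve

        curve = qpcr_curve(target0=1e3, primer0=1e13)
        print([f"{c:.2e}" for c in curve[::10]])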

  6. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    NASA Astrophysics Data System (ADS)

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-01

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy, and results for the ITER antenna configuration, including detailed maps of electric field, and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model [1], using the finite-difference electromagnetic Vorpal/Vsim software [2]. This model has been augmented with a non-linear rf-sheath sub-grid model [3], which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex description of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves [4] in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques [5] to insure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential, should enable the modeling of sheath plasma waves, a predicted additional root to the dispersion

  7. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    SciTech Connect

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-12

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy, and results for the ITER antenna configuration, including detailed maps of electric field, and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model, using the finite-difference electromagnetic Vorpal/Vsim software. This model has been augmented with a non-linear rf-sheath sub-grid model, which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex description of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques to insure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential, should enable the modeling of sheath plasma waves, a predicted additional root to the dispersion, existing at the

  8. A Systems Perspective on Situation Awareness I: Conceptual Framework, Modeling, and Quantitative Measurement

    DTIC Science & Technology

    2003-05-01

    A Systems Perspective on Situation Awareness I: Conceptual Framework, Modeling, and Quantitative Measurement. Alex Kirlik, Institute of Aviation.

  9. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps in the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
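
    A toy illustration of the idea (not the paper's model) is sketched below: a retail shelf is simulated event by event as a FIFO queue with daily deliveries and random demand, and the resulting storage times feed a simple log-linear growth model for L. monocytogenes; all parameters are invented.

        # Hypothetical sketch: a FIFO retail shelf modeled event-by-event (daily
        # deliveries, random daily demand), yielding storage times that feed a simple
        # exponential growth model for L. monocytogenes.  All parameters are invented.
        import numpy as np

        rng = np.random.default_rng(4)
        days, delivery, mu = 200, 40, 0.2          # growth rate mu in log10 CFU/day
        shelf = []                                  # list of arrival days (FIFO)
        storage_times = []

        for day in range(days):
            shelf.extend([day] * delivery)          # morning delivery
            demand = rng.poisson(35)                # customers buy oldest packs first
            sold, shelf = shelf[:demand], shelf[demand:]
            storage_times.extend(day - a for a in sold)

        storage_times = np.array(storage_times)
        final_log_counts = 1.0 + mu * storage_times      # initial 1 log10 CFU + growth
        print("95th pct storage time:", np.percentile(storage_times, 95), "days")
        print("95th pct log10 count :", round(np.percentile(final_log_counts, 95), 2))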

  10. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  11. Quantitative model studies for interfaces in organic electronic devices

    NASA Astrophysics Data System (ADS)

    Gottfried, J. Michael

    2016-11-01

    In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.

  12. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  13. Normal fault growth above pre-existing structures: insights from discrete element modelling

    NASA Astrophysics Data System (ADS)

    Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas

    2016-04-01

    In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones not only localise deformation in the host rock, but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems. The technique has therefore been applied to describe fault and fracture growth in a variety of geological settings. We model normal faulting by extending, by 30%, a 60×60×30 km crustal rift-basin model that includes brittle and ductile interactions as well as gravitational and isostatic forces. An inclined plane of weakness, which represents a pre-existing shear zone, is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upwards propagation of pre-existing

  14. Scalar conservation laws with moving constraints arising in traffic flow modeling: An existence result

    NASA Astrophysics Data System (ADS)

    Delle Monache, M. L.; Goatin, P.

    2014-12-01

    We consider a strongly coupled PDE-ODE system that describes the influence of a slow and large vehicle on road traffic. The model consists of a scalar conservation law accounting for the main traffic evolution, while the trajectory of the slower vehicle is given by an ODE depending on the downstream traffic density. The moving constraint is expressed by an inequality on the flux, which models the bottleneck created in the road by the presence of the slower vehicle. We prove the existence of solutions to the Cauchy problem for initial data of bounded variation.
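
    For reference, the general structure of such a coupled PDE-ODE model with a moving flux constraint can be written as follows (an assumed schematic form; the exact constraint function used in the paper may differ):

        \begin{aligned}
        &\partial_t \rho + \partial_x\big(\rho\, v(\rho)\big) = 0,\\
        &\dot y(t) = \min\{V_b,\; v(\rho(t, y(t)+))\},\\
        &\rho(t,y(t))\,\big(v(\rho(t,y(t))) - \dot y(t)\big) \le F_\alpha\big(\dot y(t)\big),
        \end{aligned}

    where ρ is the traffic density, v(ρ) the mean traffic speed, y(t) the slow vehicle's position, V_b its maximal speed, and F_α the reduced flux capacity at the bottleneck.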

  15. A short time existence/uniqueness result for a nonlocal topology-preserving segmentation model

    NASA Astrophysics Data System (ADS)

    Forcadel, Nicolas; Le Guyader, Carole

    Motivated by a prior applied work of Vese and the second author dedicated to segmentation under topological constraints, we derive a slightly modified model phrased as a functional minimization problem, and propose to study it from a theoretical viewpoint. The mathematical model leads to a second order nonlinear PDE with a singularity at Du=0 and containing a nonlocal term. A suitable setting is thus the one of the viscosity solution theory and, in this framework, we establish a short time existence/uniqueness result as well as a Lipschitz regularity result for the solution.

  16. Existing General Population Models Inaccurately Predict Lung Cancer Risk in Patients Referred for Surgical Evaluation

    PubMed Central

    Isbell, James M.; Deppen, Stephen; Putnam, Joe B.; Nesbitt, Jonathan C.; Lambright, Eric S.; Dawes, Aaron; Massion, Pierre P.; Speroff, Theodore; Jones, David R.; Grogan, Eric L.

    2013-01-01

    Background: Patients undergoing resections for suspicious pulmonary lesions have a 9-55% benign rate. Validated prediction models exist to estimate the probability of malignancy in a general population, and current practice guidelines recommend their use. We evaluated these models in a surgical population to determine the accuracy of existing models to predict benign or malignant disease. Methods: We conducted a retrospective review of our thoracic surgery quality improvement database (2005-2008) to identify patients who underwent resection of a pulmonary lesion. Patients were stratified into subgroups based on age, smoking status and fluorodeoxyglucose positron emission tomography (PET) results. The probability of malignancy was calculated for each patient using the Mayo and SPN prediction models. Receiver operating characteristic (ROC) and calibration curves were used to measure model performance. Results: 189 patients met selection criteria; 73% were malignant. Patients with preoperative PET scans were divided into 4 subgroups based on age, smoking history and nodule PET avidity. Older smokers with PET-avid lesions had a 90% malignancy rate. Patients with PET-non-avid lesions, or PET-avid lesions with age <50 years or never smokers of any age, had a 62% malignancy rate. The area under the ROC curve for the Mayo and SPN models was 0.79 and 0.80, respectively; however, the models were poorly calibrated (p<0.001). Conclusions: Despite improvements in diagnostic and imaging techniques, current general population models do not accurately predict lung cancer among patients referred for surgical evaluation. Prediction models with greater accuracy are needed to identify patients with benign disease to reduce non-therapeutic resections. PMID:21172518

  17. Existing general population models inaccurately predict lung cancer risk in patients referred for surgical evaluation.

    PubMed

    Isbell, James M; Deppen, Stephen; Putnam, Joe B; Nesbitt, Jonathan C; Lambright, Eric S; Dawes, Aaron; Massion, Pierre P; Speroff, Theodore; Jones, David R; Grogan, Eric L

    2011-01-01

    Patients undergoing resections for suspicious pulmonary lesions have a 9% to 55% benign rate. Validated prediction models exist to estimate the probability of malignancy in a general population and current practice guidelines recommend their use. We evaluated these models in a surgical population to determine the accuracy of existing models to predict benign or malignant disease. We conducted a retrospective review of our thoracic surgery quality improvement database (2005 to 2008) to identify patients who underwent resection of a pulmonary lesion. Patients were stratified into subgroups based on age, smoking status, and fluorodeoxyglucose positron emission tomography (PET) results. The probability of malignancy was calculated for each patient using the Mayo and solitary pulmonary nodules prediction models. Receiver operating characteristic and calibration curves were used to measure model performance. A total of 189 patients met selection criteria; 73% were malignant. Patients with preoperative PET scans were divided into four subgroups based on age, smoking history, and nodule PET avidity. Older smokers with PET-avid lesions had a 90% malignancy rate. Patients with PET-nonavid lesions, PET-avid lesions with age less than 50 years, or never smokers of any age had a 62% malignancy rate. The area under the receiver operating characteristic curve for the Mayo and solitary pulmonary nodules models was 0.79 and 0.80, respectively; however, the models were poorly calibrated (p<0.001). Despite improvements in diagnostic and imaging techniques, current general population models do not accurately predict lung cancer among patients referred for surgical evaluation. Prediction models with greater accuracy are needed to identify patients with benign disease to reduce nontherapeutic resections. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Variability of model-free and model-based quantitative measures of EEG.

    PubMed

    Van Albada, Sacha J; Rennie, Christopher J; Robinson, Peter A

    2007-06-01

    Variable contributions of state and trait to the electroencephalographic (EEG) signal affect the stability over time of EEG measures, quite apart from other experimental uncertainties. The extent of intraindividual and interindividual variability is an important factor in determining the statistical, and hence possibly clinical significance of observed differences in the EEG. This study investigates the changes in classical quantitative EEG (qEEG) measures, as well as of parameters obtained by fitting frequency spectra to an existing continuum model of brain electrical activity. These parameters may have extra variability due to model selection and fitting. Besides estimating the levels of intraindividual and interindividual variability, we determined approximate time scales for change in qEEG measures and model parameters. This provides an estimate of the recording length needed to capture a given percentage of the total intraindividual variability. Also, if more precise time scales can be obtained in future, these may aid the characterization of physiological processes underlying various EEG measures. Heterogeneity of the subject group was constrained by testing only healthy males in a narrow age range (mean = 22.3 years, sd = 2.7). Eyes-closed EEGs of 32 subjects were recorded at weekly intervals over an approximately six-week period, of which 13 subjects were followed for a year. QEEG measures, computed from Cz spectra, were powers in five frequency bands, alpha peak frequency, and spectral entropy. Of these, theta, alpha, and beta band powers were most reproducible. Of the nine model parameters obtained by fitting model predictions to experiment, the most reproducible ones quantified the total power and the time delay between cortex and thalamus. About 95% of the maximum change in spectral parameters was reached within minutes of recording time, implying that repeat recordings are not necessary to capture the bulk of the variability in EEG spectra.
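
    For orientation, the classical qEEG measures mentioned above can be computed from a single-channel spectrum as sketched below (synthetic signal, conventional band edges; the continuum-model fitting used in the study is not reproduced here).

        # Hypothetical sketch: classical qEEG measures (band powers, alpha peak
        # frequency, spectral entropy) from a single-channel signal via Welch's method.
        import numpy as np
        from scipy.signal import welch

        fs = 256
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(5)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)   # 10 Hz alpha + noise

        freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
        df = freqs[1] - freqs[0]
        bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}
        powers = {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
                  for name, (lo, hi) in bands.items()}

        alpha_mask = (freqs >= 8) & (freqs < 13)
        alpha_peak = freqs[alpha_mask][np.argmax(psd[alpha_mask])]
        p = psd[(freqs >= 0.5) & (freqs < 45)]
        p = p / p.sum()
        spectral_entropy = -np.sum(p * np.log2(p))

        print(powers, alpha_peak, round(spectral_entropy, 2))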

  19. An overview of existing modeling tools making use of model checking in the analysis of biochemical networks

    PubMed Central

    Carrillo, Miguel; Góngora, Pedro A.; Rosenblueth, David A.

    2012-01-01

    Model checking is a well-established technique for automatically verifying complex systems. Recently, model checkers have appeared in computer tools for the analysis of biochemical (and gene regulatory) networks. We survey several such tools to assess the potential of model checking in computational biology. Next, our overview focuses on direct applications of existing model checkers, as well as on algorithms for biochemical network analysis influenced by model checking, such as those using binary decision diagrams (BDDs) or Boolean-satisfiability solvers. We conclude with advantages and drawbacks of model checking for the analysis of biochemical networks. PMID:22833747

  20. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
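
    A toy illustration of the weighting principle (not the WLSFEM itself) is given below: a model field on a grid is blended with sparse noisy point data by weighted least squares, so that more accurate measurements (smaller sigma) pull the solution more strongly; all values are invented.

        # Hypothetical sketch of the weighted least-squares idea: blend a model field
        # with sparse, noisy point measurements, where each measurement's influence
        # scales with its estimated accuracy (1 / sigma^2).
        import numpy as np

        n = 50
        x = np.linspace(0, 1, n)
        u_model = np.sin(np.pi * x) * 0.9                 # imperfect numerical solution
        idx = np.array([5, 20, 35, 45])                   # measurement locations
        sigma = np.array([0.01, 0.10, 0.01, 0.10])        # measurement uncertainties
        d = np.sin(np.pi * x[idx]) + np.random.default_rng(6).normal(0, sigma)

        H = np.zeros((idx.size, n)); H[np.arange(idx.size), idx] = 1.0
        W_m = np.eye(n) * 1.0                             # trust in the model field
        W_d = np.diag(1.0 / sigma**2)                     # trust in each data point

        A = W_m + H.T @ W_d @ H
        b = W_m @ u_model + H.T @ W_d @ d
        u_assim = np.linalg.solve(A, b)
        print("model vs assimilated at data points:", u_model[idx], u_assim[idx])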

  1. A review of existing models and methods to estimate employment effects of pollution control policies

    SciTech Connect

    Darwin, R.F.; Nesse, R.J.

    1988-02-01

    The purpose of this paper is to provide information about existing models and methods used to estimate coal mining employment impacts of pollution control policies. The EPA is currently assessing the consequences of various alternative policies to reduce air pollution. One important potential consequence of these policies is that coal mining employment may decline or shift from low-sulfur to high-sulfur coal producing regions. The EPA requires models that can estimate the magnitude and cost of these employment changes at the local level. This paper contains descriptions and evaluations of three models and methods currently used to estimate the size and cost of coal mining employment changes. The first model reviewed is the Coal and Electric Utilities Model (CEUM), a well established, general purpose model that has been used by the EPA and other groups to simulate air pollution control policies. The second model reviewed is the Advanced Utility Simulation Model (AUSM), which was developed for the EPA specifically to analyze the impacts of air pollution control policies. Finally, the methodology used by Arthur D. Little, Inc. to estimate the costs of alternative air pollution control policies for the Consolidated Coal Company is discussed. These descriptions and evaluations are based on information obtained from published reports and from draft documentation of the models provided by the EPA. 12 refs., 1 fig.

  2. Global existence of solutions to a tear film model with locally elevated evaporation rates

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Ji, Hangjie; Liu, Jian-Guo; Witelski, Thomas P.

    2017-07-01

    Motivated by a model proposed by Peng et al. (2014) for break-up of tear films on human eyes, we study the dynamics of a generalized thin film model. The governing equations form a fourth-order coupled system of nonlinear parabolic PDEs for the film thickness and salt concentration subject to non-conservative effects representing evaporation. We analytically prove the global existence of solutions to this model with mobility exponents in several different ranges and present numerical simulations that are in agreement with the analytic results. We also numerically capture other interesting dynamics of the model, including finite-time rupture-shock phenomenon due to the instabilities caused by locally elevated evaporation rates, convergence to equilibrium and infinite-time thinning.

  3. Model-based quantitative laser Doppler flowmetry in skin

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2010-09-01

    Laser Doppler flowmetry (LDF) can be used for assessing the microcirculatory perfusion. However, conventional LDF (cLDF) gives only a relative perfusion estimate for an unknown measurement volume, with no information about the blood flow speed distribution. To overcome these limitations, a model-based analysis method for quantitative LDF (qLDF) is proposed. The method uses inverse Monte Carlo technique with an adaptive three-layer skin model. By analyzing the optimal model where measured and simulated LDF spectra detected at two different source-detector separations match, the absolute microcirculatory perfusion for a specified speed region in a predefined volume is determined. qLDF displayed errors <12% when evaluated using simulations of physiologically relevant variations in the layer structure, in the optical properties of static tissue, and in blood absorption. Inhomogeneous models containing small blood vessels, hair, and sweat glands displayed errors <5%. Evaluation models containing single larger blood vessels displayed significant errors but could be dismissed by residual analysis. In vivo measurements using local heat provocation displayed a higher perfusion increase with qLDF than cLDF, due to nonlinear effects in the latter. The qLDF showed that the perfusion increase occurred due to an increased amount of red blood cells with a speed >1 mm/s.

  4. Quantitative model of the growth of floodplains by vertical accretion

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood that best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to the estimation of floodplain growth for other floodplains throughout the world that do not have detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
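
    The incremental mechanism described above is easy to reproduce in a few lines of simulation. The sketch below uses a hypothetical parameterization (a lognormal annual peak discharge, a power-law stage-discharge rating, and a fixed deposition increment per overbank flood); none of these values are taken from the study.

```python
# Hedged sketch of the incremental accretion model: a constant deposition
# increment D is added for every flood whose peak discharge exceeds the
# threshold set by the current floodplain elevation. The lognormal flood
# distribution and power-law rating are assumptions, not study values.
import numpy as np

rng = np.random.default_rng(0)

D = 0.02      # assumed net sediment deposition per overbank flood (m)
elev = 1.0    # initial floodplain elevation above the channel bed (m)

def threshold_discharge(h, a=50.0, b=1.5):
    """Assumed power-law rating: discharge required to overtop elevation h."""
    return a * h**b

elevations = []
for year in range(200):
    peak = rng.lognormal(mean=5.0, sigma=0.6)   # annual peak discharge (m^3/s)
    if peak > threshold_discharge(elev):        # overbank flood deposits sediment
        elev += D
    elevations.append(elev)

# Growth decelerates: a higher floodplain raises the threshold discharge,
# so fewer floods overtop it, reproducing the monotonic growth curve.
print(f"elevation after 200 years: {elevations[-1]:.2f} m")
```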

  5. Dental students' reflections about long-term care experiences through an existing model of oral health.

    PubMed

    Brondani, Mario; Pattanaporn, Komkham

    2017-09-01

    The aim of this study was to explore students' reflective thinking about long-term care experiences from the perspective of a model of oral health. A total of 186 reflections from 193 second-year undergraduate dental students enrolled between 2011/12 and 2014/15 at the University of British Columbia were explored qualitatively. Reflections had a word limit of 300, and students were asked to relate an existing model of oral health to their long-term care experiences. We identified the main ideas related to the geriatric dentistry experience in long-term care via a thematic analysis. The thematic analysis revealed that students attempted to demystify their pre-conceived ideas about older people and long-term care facilities, to think outside the box (for example, beyond a typical dental office), and to consider caring for elderly people through an interprofessional lens. According to some students, not all domains from the existing model of oral health were directly relevant to their geriatric experience, while other domains, including interprofessionalism and cognition, were missing. While some participants had a positive attitude towards caring for this cohort of the population, others did not regard this educational activity as a constructive experience. The nature of most students' reflective thinking within a long-term care experience was shown to be related to an existing model of oral health. This model can help to give meaning to the dental geriatric experience of an undergraduate curriculum. Such experience has been instrumental in overcoming potential misconceptions about long-term care and geriatric dentistry. © 2017 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.

  6. Global existence and asymptotic stability for a nonlinear integrodifferential equation modeling heat flow

    NASA Astrophysics Data System (ADS)

    Brandon, Deborah

    1989-06-01

    Initial value problems arising from models for 1-D heat flow (with finite wave speeds) in materials with memory were studied. Under assumptions that ensure compatibility of the constitutive relations with the second law of thermodynamics, the resulting integrodifferential equation is hyperbolic near equilibrium. The existence of unique, globally (in time) defined, classical solutions to the problems under consideration is established, provided the data are smooth and sufficiently close to equilibrium. Both Dirichlet and Neumann boundary conditions are treated as well as the problem on the entire real line. Local existence is proved using a contraction mapping argument that involves estimates for linear hyperbolic PDEs with variable coefficients. Global existence is obtained by deriving a priori energy estimates. These estimates are based on inequalities for strongly positive Volterra kernels (including a new inequality that is needed due to the form of the constitutive relations). Furthermore, compatibility with the second law plays an essential role in the proof in order to obtain an existence result under less restrictive assumptions on the data.

  7. Goal relevance as a quantitative model of human task relevance.

    PubMed

    Tanner, James; Itti, Laurent

    2017-03-01

    The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as it pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, their goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
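
    A minimal sketch of the proposed measure, assuming an invented four-option belief distribution over how a goal might be reached; taking the divergence of the posterior relative to the prior is one plausible reading of the definition, not a detail stated in the abstract.

```python
# Sketch of the "goal relevance" measure: the Kullback-Leibler divergence
# between belief distributions over ways of achieving a goal, before and
# after an observation. The example distributions are invented.
import numpy as np

def goal_relevance(prior, posterior, eps=1e-12):
    """KL(posterior || prior) in bits, over the same set of candidate plans."""
    p = np.asarray(posterior, dtype=float) + eps
    q = np.asarray(prior, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

prior     = [0.25, 0.25, 0.25, 0.25]   # four equally plausible routes to the goal
posterior = [0.70, 0.10, 0.10, 0.10]   # beliefs after observing an obstacle

print(f"goal relevance of the observation: {goal_relevance(prior, posterior):.3f} bits")
```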

  8. Numerical Modelling of Extended Leak-Off Test with a Pre-Existing Fracture

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Larsen, I.; Bauer, A.

    2016-04-01

    Extended leak-off test (XLOT) is one of the few techniques available for stress measurements in oil and gas wells. Interpretation of the test is often difficult since the results depend on a multitude of factors, including the presence of natural or drilling-induced fractures in the near-well area. Coupled numerical modelling of XLOT has been performed to investigate the pressure behaviour during the flowback phase as well as the effect of a pre-existing fracture on the test results in a low-permeability formation. Essential features of XLOT known from field measurements are captured by the model, including the saw-tooth shape of the pressure vs injected volume curve, and the change of slope in the pressure vs time curve during flowback used by operators as an indicator of the bottomhole pressure reaching the minimum in situ stress. Simulations with a pre-existing fracture running from the borehole wall in the radial direction have revealed that the results of XLOT are quite sensitive to the orientation of the pre-existing fracture. In particular, the fracture initiation pressure and the formation breakdown pressure increase steadily with decreasing angle between the fracture and the minimum in situ stress. Our findings seem to invalidate the use of the fracture initiation pressure and the formation breakdown pressure for stress measurements or rock strength evaluation purposes.

  9. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3d objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3d information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detection of changes in the 3d structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which 3d change detection can be performed against an existing 3d model. Our approach is capable of performing change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3d model from the same camera poses as those of the image-based depth map. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
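
    The core comparison step lends itself to a short sketch: a depth map estimated from imagery is compared against a depth map rendered from the existing 3d model at the same camera pose, and pixels whose depths disagree by more than a tolerance are flagged. The tolerance and the synthetic inputs below are illustrative assumptions, not values from the paper.

```python
# Illustrative depth-based change test: flag pixels whose image-derived depth
# differs from the depth rendered out of the existing 3d model by more than a
# tolerance. The tolerance and synthetic scene are assumptions.
import numpy as np

def detect_changes(depth_estimated, depth_rendered, tol=0.5):
    """Boolean mask of pixels whose depths differ by more than tol (metres)."""
    valid = np.isfinite(depth_estimated) & np.isfinite(depth_rendered)
    return valid & (np.abs(depth_estimated - depth_rendered) > tol)

# Synthetic example: a flat scene at 20 m with a new 2 m object in front of it.
rendered = np.full((120, 160), 20.0)       # depth map rendered from the 3d model
estimated = rendered.copy()                # depth map estimated from the imagery
estimated[40:60, 70:90] -= 2.0             # the new object is closer to the camera

mask = detect_changes(estimated, rendered)
print(f"changed pixels: {mask.sum()} of {mask.size}")
```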

  10. Evaluating habitat for black-footed ferrets: Revision of an existing model

    USGS Publications Warehouse

    Biggins, Dean E.; Lockhart, J. Michael; Godbey, Jerry L.

    2006-01-01

    Black-footed ferrets (Mustela nigripes) are highly dependent on prairie dogs (Cynomys spp.) as prey, and prairie dog colonies are the only known habitats that sustain black-footed ferret populations. An existing model used extensively for evaluating black-footed ferret reintroduction habitat defined complexes by interconnecting colonies with 7-km line segments. Although the 7-km complex remains a useful construct, we propose additional, smaller-scale evaluations that consider 1.5-km subcomplexes. The original model estimated the carrying capacity of complexes based on energy requirements of ferrets and density estimates of their prairie dog prey. Recent data have supported earlier contentions of intraspecific competition and intrasexual territorial behavior in ferrets. We suggest a revised model that retains the fixed linear relationship of the existing model when prairie dog densities are <18/ha and uses a curvilinear relationship that reflects increasing effects of ferret territoriality when there are 18–42 prairie dogs per hectare. We discuss possible effects of colony size and shape, interacting with territoriality, as justification for the exclusion of territorial influences if a prairie dog colony supports only a single female ferret. We also present data to support continued use of active prairie dog burrow densities as indices suitable for broad-scale estimates of prairie dog density. Calculation of percent of complexes that are occupied by prairie dog colonies was recommended as part of the original habitat evaluation process. That attribute has been largely ignored, resulting in rating anomalies.

  11. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model

    PubMed Central

    Hall, Kevin D; Butte, Nancy F; Swinburn, Boyd A; Chow, Carson C

    2013-01-01

    Summary Background Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. Methods We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative predictions about weight-management interventions. The model was calibrated to reference body composition data in healthy children and validated by comparing model predictions with data other than those used to build the model. Findings The model accurately simulated the changes in body composition and energy expenditure reported in reference data during healthy growth, and predicted increases in energy intake from ages 5–18 years of roughly 1200 kcal per day in boys and 900 kcal per day in girls. Development of childhood obesity necessitated a substantially greater excess energy intake than for development of adult obesity. Furthermore, excess energy intake in overweight and obese children calculated by the model greatly exceeded the typical energy balance calculated on the basis of growth charts. At the population level, the excess weight of US children in 2003–06 was associated with a mean increase in energy intake of roughly 200 kcal per day per child compared with similar children in 1976–80. The model also suggests that therapeutic windows when children can outgrow obesity without losing weight might exist, especially during periods of high growth potential in boys who are not severely obese. Interpretation This model quantifies the energy excess underlying obesity and calculates the necessary intervention magnitude to achieve bodyweight change in children. Policy makers and clinicians now have a quantitative technique for understanding the childhood obesity epidemic and planning interventions to control it. PMID:24349967

  12. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model.

    PubMed

    Hall, Kevin D; Butte, Nancy F; Swinburn, Boyd A; Chow, Carson C

    2013-10-01

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative predictions about weight-management interventions. The model was calibrated to reference body composition data in healthy children and validated by comparing model predictions with data other than those used to build the model. The model accurately simulated the changes in body composition and energy expenditure reported in reference data during healthy growth, and predicted increases in energy intake from ages 5-18 years of roughly 1200 kcal per day in boys and 900 kcal per day in girls. Development of childhood obesity necessitated a substantially greater excess energy intake than for development of adult obesity. Furthermore, excess energy intake in overweight and obese children calculated by the model greatly exceeded the typical energy balance calculated on the basis of growth charts. At the population level, the excess weight of US children in 2003-06 was associated with a mean increase in energy intake of roughly 200 kcal per day per child compared with similar children in 1971-74 [corrected]. The model also suggests that therapeutic windows when children can outgrow obesity without losing weight might exist, especially during periods of high growth potential in boys who are not severely obese. This model quantifies the energy excess underlying obesity and calculates the necessary intervention magnitude to achieve bodyweight change in children. Policy makers and clinicians now have a quantitative technique for understanding the childhood obesity epidemic and planning interventions to control it. Intramural Research Program of the National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases.
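
    A highly simplified energy-balance sketch in the spirit of the model described above: bodyweight change is driven by the gap between intake and a weight-dependent expenditure. The parameter values and the linear expenditure term are generic assumptions for illustration and do not reproduce the published child model.

```python
# Generic energy-balance sketch (not the published model): weight change is
# the intake/expenditure gap divided by the energy content of tissue change,
# with expenditure assumed to rise linearly with bodyweight.
RHO = 32000.0   # assumed energy content of tissue change (kJ per kg)
K = 2000.0      # assumed weight-independent expenditure (kJ per day)
GAMMA = 100.0   # assumed expenditure increase per kg of bodyweight (kJ/day/kg)

def simulate_weight(intake_kj_per_day, w0=20.0, days=365):
    """Integrate dW/dt = (intake - expenditure(W)) / RHO with daily steps."""
    w = w0
    for _ in range(days):
        expenditure = K + GAMMA * w
        w += (intake_kj_per_day - expenditure) / RHO
    return w

# A sustained intake above the expenditure of the starting weight drives the
# simulated child toward a higher steady-state weight.
print(f"weight after one year: {simulate_weight(6000.0):.1f} kg")
```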

  13. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients world-wide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall further development is needed to improve and validate animal models for the diverse areas in epilepsy research where suitable fit for purpose models are urgently needed in the search for more effective treatments. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Quantitative models and strength of evidence in causal inference

    NASA Astrophysics Data System (ADS)

    Gerritsen, J.; Bailey, J.; Boschen, C.; Burton, J.; Lowman, B.; Ludwig, J.; Wilkes, S.; Wirts, J.; Zheng, L.

    2005-05-01

    Human activities such as mining, logging, agriculture and residential development have caused biological degradation to streams of West Virginia. Total Maximum Daily Loads (TMDLs) are being developed for all biologically-impaired streams within the state, and require causes of impairment to be identified so that pollutants can be controlled. Using a statewide dataset, we examined macroinvertebrate community response to single and multiple stressors, and applied two quantitative modeling approaches for ranking stressors. A "dirty reference" approach examined community composition in clean and predefined stressed sites, and tolerance values of individual taxa were estimated with reciprocal averaging. We integrated the empirical models of biological impairment with onsite field observations of biota, habitat, water quality, watershed observations, within a strength of evidence approach to infer causes of impairment. Candidate causes were screened to eliminate those shown not to co-occur with effects. Remaining candidate causes were ranked according to considerations of evidence within each watershed, as well as from the statewide empirical models and from other published sources. Strongest inferences were obtained where the independent predictive model agreed with within-watershed observations of stressor measures. Final stressor determinations for each watershed will be used for the development and implementation of TMDLs.

  15. A Quantitative Model to Estimate Drug Resistance in Pathogens

    PubMed Central

    Baker, Frazier N.; Cushion, Melanie T.; Porollo, Aleksey

    2016-01-01

    Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutations correlate strongly with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDHFR, another drug/target pair (PjDHPS/SMX) and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of the newly sequenced variants of the pathogen and can be extended to other drug targets and organisms. PMID:28018911

  16. Towards Quantitative Spatial Models of Seabed Sediment Composition

    PubMed Central

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom’s parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040
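
    The modelling recipe described above can be sketched as follows: transform the mud/sand/gravel fractions to additive log-ratios, fit a random forest against environmental predictors, and back-transform predictions to fractions that sum to one. The synthetic data and the choice of sand as the log-ratio denominator are assumptions for illustration.

```python
# Sketch of the approach: additive log-ratio (ALR) transform of sediment
# composition, random forest regression on environmental predictors, and
# back-transformation to fractions. Data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Placeholder predictors (e.g. depth, current speed, reflectance).
X = rng.normal(size=(n, 3))

# Synthetic compositions: mud, sand, gravel fractions summing to one.
comp = rng.dirichlet(alpha=[2.0, 5.0, 1.0], size=n)
mud, sand, gravel = comp[:, 0], comp[:, 1], comp[:, 2]

# Additive log-ratios with sand as the common denominator (one possible choice).
alr = np.column_stack([np.log(mud / sand), np.log(gravel / sand)])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, alr)

# Back-transform a prediction so the three fractions again sum to one.
a1, a2 = model.predict(X[:1])[0]
sand_hat = 1.0 / (1.0 + np.exp(a1) + np.exp(a2))
mud_hat, gravel_hat = np.exp(a1) * sand_hat, np.exp(a2) * sand_hat
print(f"predicted mud/sand/gravel: {mud_hat:.2f} / {sand_hat:.2f} / {gravel_hat:.2f}")
```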

  17. Local Existence of Weak Solutions to Kinetic Models of Granular Media

    NASA Astrophysics Data System (ADS)

    Agueh, Martial

    2016-08-01

    We prove, in any dimension $d \geq 1$, local-in-time existence of weak solutions to the Cauchy problem for the kinetic equation of granular media, $\partial_t f + v \cdot \nabla_x f = \operatorname{div}_v[f(\nabla W *_v f)]$, when the initial data are nonnegative, integrable and bounded functions with compact support in velocity, and the interaction potential $W$ is a $C^2(\mathbb{R}^d)$ radially symmetric convex function. Our proof is constructive and relies on a splitting argument in position and velocity, where the spatially homogeneous equation is interpreted as the gradient flow of a convex interaction energy with respect to the quadratic Wasserstein distance. Our result generalizes the local existence result obtained by Benedetto et al. (RAIRO Modél Math Anal Numér 31(5):615-641, 1997) on the one-dimensional model of this equation for a cubic power-law interaction potential.

  18. Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, I. I.; Turaev, D. V.

    2017-01-01

    We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.

  19. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    NASA Astrophysics Data System (ADS)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial Revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are, however, very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical

  20. Quantitative analysis of cyclic beta-turn models.

    PubMed Central

    Perczel, A.; Fasman, G. D.

    1992-01-01

    The beta-turn is a frequently found structural unit in the conformation of globular proteins. Although the circular dichroism (CD) spectra of the alpha-helix and beta-pleated sheet are well defined, there remains some ambiguity concerning the pure component CD spectra of the different types of beta-turns. Recently, it has been reported (Hollósi, M., Kövér, K.E., Holly, S., Radics, L., & Fasman, G.D., 1987, Biopolymers 26, 1527-1572; Perczel, A., Hollósi, M., Foxman, B.M., & Fasman, G.D., 1991a, J. Am. Chem. Soc. 113, 9772-9784) that some pseudohexapeptides (e.g., the cyclo[(delta)Ava-Gly-Pro-Aaa-Gly] where Aaa = Ser, Ser(OtBu), or Gly) in many solvents adopt a conformational mixture of type I and the type II beta-turns, although the X-ray-determined conformation was an ideal type I beta-turn. In addition to these pseudohexapeptides, conformational analysis was also carried out on three pseudotetrapeptides and three pseudooctapeptides. The target of the conformation analysis reported herein was to determine whether the ring stress of the above beta-turn models has an influence on their conformational properties. Quantitative nuclear Overhauser effect (NOE) measurements yielded interproton distances. The conformational average distances so obtained were interpreted utilizing molecular dynamics (MD) simulations to yield the conformational percentages. These conformational ratios were correlated with the conformational weights obtained by quantitative CD analysis of the same compounds. The pure component CD curves of type I and type II beta-turns were also obtained, using a recently developed algorithm (Perczel, A., Tusnády, G., Hollósi, M., & Fasman, G.D., 1991b, Protein Eng. 4(6), 669-679). For the first time the results of a CD deconvolution, based on the CD spectra of 14 beta-turn models, were assigned by quantitative NOE results. The NOE experiments confirmed the ratios of the component curves found for the two major beta-turns by CD analysis. These results

  1. Quantitative analysis of cyclic beta-turn models.

    PubMed

    Perczel, A; Fasman, G D

    1992-03-01

    The beta-turn is a frequently found structural unit in the conformation of globular proteins. Although the circular dichroism (CD) spectra of the alpha-helix and beta-pleated sheet are well defined, there remains some ambiguity concerning the pure component CD spectra of the different types of beta-turns. Recently, it has been reported (Hollósi, M., Kövér, K.E., Holly, S., Radics, L., & Fasman, G.D., 1987, Biopolymers 26, 1527-1572; Perczel, A., Hollósi, M., Foxman, B.M., & Fasman, G.D., 1991a, J. Am. Chem. Soc. 113, 9772-9784) that some pseudohexapeptides (e.g., the cyclo[(delta)Ava-Gly-Pro-Aaa-Gly] where Aaa = Ser, Ser(OtBu), or Gly) in many solvents adopt a conformational mixture of type I and the type II beta-turns, although the X-ray-determined conformation was an ideal type I beta-turn. In addition to these pseudohexapeptides, conformational analysis was also carried out on three pseudotetrapeptides and three pseudooctapeptides. The target of the conformation analysis reported herein was to determine whether the ring stress of the above beta-turn models has an influence on their conformational properties. Quantitative nuclear Overhauser effect (NOE) measurements yielded interproton distances. The conformational average distances so obtained were interpreted utilizing molecular dynamics (MD) simulations to yield the conformational percentages. These conformational ratios were correlated with the conformational weights obtained by quantitative CD analysis of the same compounds. The pure component CD curves of type I and type II beta-turns were also obtained, using a recently developed algorithm (Perczel, A., Tusnády, G., Hollósi, M., & Fasman, G.D., 1991b, Protein Eng. 4(6), 669-679). For the first time the results of a CD deconvolution, based on the CD spectra of 14 beta-turn models, were assigned by quantitative NOE results. The NOE experiments confirmed the ratios of the component curves found for the two major beta-turns by CD analysis. These results

  2. Using Existing Arctic Atmospheric Mercury Measurements to Refine Global and Regional Scale Atmospheric Transport Models

    NASA Astrophysics Data System (ADS)

    Moore, C. W.; Dastoor, A.; Steffen, A.; Nghiem, S. V.; Agnan, Y.; Obrist, D.

    2015-12-01

    Northern hemisphere background atmospheric concentrations of gaseous elemental mercury (GEM) have been declining by up to 25% over the last ten years at some lower latitude sites. However, this decline has ranged from no decline to 9% over 10 years at Arctic long-term measurement sites. Measurements also show a highly dynamic nature of mercury (Hg) species in Arctic air and snow from early spring to the end of summer when biogeochemical transformations peak. Currently, models are unable to reproduce this variability accurately. Estimates of Hg accumulation in the Arctic and Arctic Ocean by models require a full mechanistic understanding of the multi-phase redox chemistry of Hg in air and snow as well as the role of meteorology in the physicochemical processes of Hg. We will show how findings from ground-based atmospheric Hg measurements like those made in spring 2012 during the Bromine, Ozone and Mercury Experiment (BROMEX) near Barrow, Alaska can be used to reduce the discrepancy between measurements and model output in the Canadian GEM-MACH-Hg model. The model is able to reproduce and to explain some of the variability in Arctic Hg measurements but discrepancies still remain. One improvement involves incorporation of new physical mechanisms such as the one we were able to identify during BROMEX. This mechanism, by which atmospheric mercury depletion events are abruptly ended via sea ice leads opening and inducing shallow convective mixing that replenishes GEM (and ozone) in the near surface atmospheric layer, causing an immediate recovery from the depletion event, is currently lacking in models. Future implementation of this physical mechanism will have to incorporate current remote sensing sea ice products but also rely on the development of products that can identify sea ice leads quantitatively. In this way, we can advance the knowledge of the dynamic nature of GEM in the Arctic and the impact of climate change along with new regulations on the overall

  3. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment.
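
    A minimal sketch of the Partial Least Squares set-up, with synthetic data standing in for the 132 coal samples and 24 measured parameters; the robust-PLS and outlier-identification steps described above are omitted here for brevity.

```python
# Sketch of a PLS regression of a trace element concentration on coal/ash
# parameters, reporting the cross-validated RMSE. All data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_params = 132, 24

X = rng.normal(size=(n_samples, n_params))                 # coal and ash parameters
beta = rng.normal(size=n_params)
y = X @ beta + rng.normal(scale=0.5, size=n_samples)       # one trace element, e.g. As

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV: {rmsecv:.3f} (the study reports this relative to the response range)")
```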

  4. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model-fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object that embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superseded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semiquantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area was slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. I have enclosed a set of abstracts of selected papers on the work described below. Several papers that draw on the research supported during this period appeared in print after the grant period ended.

  5. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment. PMID:27438794

  6. Development of an Experimental Model of Diabetes Co-Existing with Metabolic Syndrome in Rats

    PubMed Central

    Suman, Rajesh Kumar; Ray Mohanty, Ipseeta; Borde, Manjusha K.; Maheshwari, Ujwala; Deshmukh, Y. A.

    2016-01-01

    Background. The incidence of metabolic syndrome co-existing with diabetes mellitus is on the rise globally. Objective. The present study was designed to develop a unique animal model that will mimic the pathological features seen in individuals with diabetes and metabolic syndrome, suitable for pharmacological screening of drugs. Materials and Methods. A combination of a high-fat diet (HFD) and a low dose of streptozotocin (STZ) at 30, 35, and 40 mg/kg was used to induce metabolic syndrome in the setting of diabetes mellitus in Wistar rats. Results. The 40 mg/kg STZ produced sustained hyperglycemia and the dose was thus selected for the study to induce diabetes mellitus. Various components of metabolic syndrome such as dyslipidemia (increased triglycerides, total cholesterol, and LDL cholesterol, and decreased HDL cholesterol), diabetes mellitus (blood glucose, HbA1c, serum insulin, and C-peptide), and hypertension (systolic blood pressure) were mimicked in the developed model of metabolic syndrome co-existing with diabetes mellitus. In addition to significant cardiac injury, atherogenic index, inflammation (hs-CRP), and decline in hepatic and renal function were observed in the HF-DC group when compared to NC group rats. The histopathological assessment confirmed the presence of edema, necrosis, and inflammation in the heart, pancreas, liver, and kidney of the HF-DC group as compared to NC. Conclusion. The present study has developed a unique rodent model of metabolic syndrome, with diabetes as an essential component. PMID:26880906

  7. Quantitative Genetics Model as the Unifying Model for Defining Genomic Relationship and Inbreeding Coefficient

    PubMed Central

    Wang, Chunkao; Da, Yang

    2014-01-01

    The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample. PMID:25517971
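
    One common construction of a genomic additive relationship matrix, roughly corresponding to the "equal SNP effects" style of definition discussed above, is sketched below; the simulated genotypes and the diagonal-based inbreeding measure are for illustration only and do not reproduce the paper's six definitions or its proposed co-ancestry-based coefficient.

```python
# Sketch of a genomic additive relationship matrix: genotypes coded 0/1/2 are
# centred by twice the allele frequency and scaled by the expected sum of SNP
# variances. This is a generic construction for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_snp = 10, 1000

p = rng.uniform(0.05, 0.95, size=n_snp)                        # allele frequencies
M = rng.binomial(2, p, size=(n_ind, n_snp)).astype(float)      # 0/1/2 genotype counts

Z = M - 2.0 * p                                                # centre by expectation
G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))                  # genomic additive relationships

# A simple genomic inbreeding measure reads off the diagonal (F = G_ii - 1);
# the coefficient proposed in the paper is based on parental co-ancestry instead.
F = np.diag(G) - 1.0
print("mean diagonal:", G.diagonal().mean().round(3), "mean F:", F.mean().round(3))
```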

  8. Quantitative genetics model as the unifying model for defining genomic relationship and inbreeding coefficient.

    PubMed

    Wang, Chunkao; Da, Yang

    2014-01-01

    The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample.

  9. Mechanics of neutrophil phagocytosis: experiments and quantitative models.

    PubMed

    Herant, Marc; Heinrich, Volkmar; Dembo, Micah

    2006-05-01

    To quantitatively characterize the mechanical processes that drive phagocytosis, we observed the FcγR-driven engulfment of antibody-coated beads of diameters 3 μm to 11 μm by initially spherical neutrophils. In particular, the time course of cell morphology, of bead motion and of cortical tension were determined. Here, we introduce a number of mechanistic models for phagocytosis and test their validity by comparing the experimental data with finite element computations for multiple bead sizes. We find that the optimal models involve two key mechanical interactions: a repulsion or pressure between cytoskeleton and free membrane that drives protrusion, and an attraction between cytoskeleton and membrane newly adherent to the bead that flattens the cell into a thin lamella. Other models such as cytoskeletal expansion or swelling appear to be ruled out as main drivers of phagocytosis because of the characteristics of bead motion during engulfment. We finally show that the protrusive force necessary for the engulfment of large beads points towards storage of strain energy in the cytoskeleton over a large distance from the leading edge (approximately 0.5 μm), and that the flattening force can plausibly be generated by the known concentrations of unconventional myosins at the leading edge.

  10. Quantitative Structure-Activity Relationship Modeling of Kinase Selectivity Profiles.

    PubMed

    Kothiwale, Sandeepkumar; Borza, Corina; Pozzi, Ambra; Meiler, Jens

    2017-09-19

    The discovery of selective inhibitors of biological target proteins is the primary goal of many drug discovery campaigns. However, this goal has proven elusive, especially for inhibitors targeting the well-conserved orthosteric adenosine triphosphate (ATP) binding pocket of kinase enzymes. The human kinome is large, and it is rather difficult to profile early lead compounds against around 500 targets to gain upfront knowledge of selectivity. Further, selectivity can change drastically during derivatization of an initial lead compound. Here, we have introduced a computational model to support the profiling of compounds early in the drug discovery pipeline. On the basis of the extensively profiled activity of 70 kinase inhibitors against 379 kinases, including 81 tyrosine kinases, we developed a quantitative structure-activity relationship (QSAR) model using artificial neural networks to predict the activity of these kinase inhibitors against the panel of 379 kinases. The model's performance in predicting activity, measured as the area under the curve (AUC) of the receiver operating characteristic (ROC), ranges from 0.6 to 0.8 depending on the kinase. The profiler is available online at http://www.meilerlab.org/index.php/servers/show?s_id=23.
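
    A minimal sketch of the QSAR workflow described above for a single kinase: a small feed-forward neural network is trained on compound descriptors and evaluated by ROC AUC. The random descriptors and labels are placeholders for real fingerprints and profiled activities.

```python
# Sketch of a per-kinase QSAR classifier: neural network on compound
# descriptors, evaluated by ROC AUC. All data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_compounds, n_descriptors = 300, 64

X = rng.normal(size=(n_compounds, n_descriptors))              # compound descriptors
w = rng.normal(size=n_descriptors)
y = (X @ w + rng.normal(scale=2.0, size=n_compounds) > 0).astype(int)  # active vs one kinase

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC for this kinase: {auc:.2f}")   # the paper reports 0.6-0.8 across kinases
```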

  11. Quantitative results for square gradient models of fluids

    NASA Astrophysics Data System (ADS)

    Kong, Ling-Ti; Vriesinga, Dan; Denniston, Colin

    2011-03-01

    Square gradient models for fluids are extensively used because they are believed to provide a good qualitative understanding of the essential physics. However, unlike elasticity theory for solids, there are few quantitative results for specific (as opposed to generic) fluids. Indeed, the only numerical values of the square gradient coefficients for specific fluids have been inferred from attempts to match macroscopic properties such as surface tensions rather than from direct measurement. We employ all-atom molecular dynamics, using the TIP3P and OPLS force fields, to directly measure the coefficients of the density gradient expansion for several real fluids. For all liquids measured, including water, we find that the square gradient coefficient is negative, suggesting the need for some regularization of a model including only the square gradient, but only at wavelengths comparable to the separation between molecules. The implications for liquid-gas interfaces are also examined. Remarkably, the square gradient model is found to give a reasonably accurate description of density fluctuations in the liquid state down to wavelengths close to atomic size.

  12. Quantitative rubber sheet models of gravitation wells using Spandex

    NASA Astrophysics Data System (ADS)

    White, Gary

    2008-04-01

    Long a staple of introductory treatments of general relativity, the rubber sheet model exhibits Wheeler's concise summary, "Matter tells space-time how to curve and space-time tells matter how to move," very nicely. But what of the quantitative aspects of the rubber sheet model: how far can the analogy be pushed? We show [1] that when a mass M is suspended from the center of an otherwise unstretched elastic sheet affixed to a circular boundary it exhibits a distortion far from the center given by $h = A (M r^2)^{1/3}$. Here, as might be expected, h and r are the vertical and axial distances from the center, but this result is not the expected logarithmic form of 2-D solutions to Laplace's equation (the stretched drumhead). This surprise has a natural explanation and is confirmed experimentally with Spandex as the medium, and its consequences for general rubber sheet models are pursued. [1] "The shape of 'the Spandex' and orbits upon its surface," American Journal of Physics 70, 48-52 (2002), G. D. White and M. Walker. See also the comment by Don S. Lemons and T. C. Lipscombe, also in AJP 70, 1056-1058 (2002).
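
    The quoted scaling $h = A (M r^2)^{1/3}$ can be checked from measured (r, h) pairs with a log-log fit, as in the sketch below; the synthetic "measurements" are generated from the scaling law itself plus noise, purely to illustrate the fitting procedure.

```python
# Check of the scaling h = A (M r^2)^(1/3): on a log-log plot of h against r
# at fixed M, the slope should be 2/3. Data below are synthetic.
import numpy as np

rng = np.random.default_rng(5)
A, M = 0.08, 0.5                        # assumed prefactor and suspended mass
r = np.linspace(0.05, 0.5, 20)          # radial positions on the sheet (m)
h = A * (M * r**2) ** (1.0 / 3.0) * np.exp(rng.normal(scale=0.02, size=r.size))

slope, intercept = np.polyfit(np.log(r), np.log(h), 1)
print(f"fitted exponent in r: {slope:.3f} (expected 2/3 ≈ 0.667)")
```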

  13. Existence and qualitative properties of travelling waves for an epidemiological model with mutations

    NASA Astrophysics Data System (ADS)

    Griette, Quentin; Raoul, Gaël

    2016-05-01

    In this article, we are interested in a non-monotonic system of logistic reaction-diffusion equations. This system of equations models an epidemic where two types of pathogens are competing, and a mutation can change one type into the other with a certain rate. We show the existence of travelling waves with minimal speed, which are usually non-monotonic. Then we provide a description of the shape of those constructed travelling waves, and relate them to some Fisher-KPP fronts with non-minimal speed.

  14. Two phase modeling of nanofluid flow in existence of melting heat transfer by means of HAM

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, M.; Jafaryar, M.; Bateni, K.; Ganji, D. D.

    2017-08-01

    In this article, the Buongiorno model is applied to investigate nanofluid flow over a stretching plate in the presence of a magnetic field. Radiation and melting heat transfer are taken into account. The homotopy analysis method (HAM) is selected to solve the ODEs obtained from the similarity transformation. The roles of Brownian motion, the thermophoretic parameter, the Hartmann number, the porosity parameter, the melting parameter and the Eckert number are presented graphically. Results indicate that nanofluid velocity and concentration increase with the melting parameter. The Nusselt number decreases as the porosity and melting parameters increase.

  15. Existence of a line of critical points in a two-dimensional Lebwohl Lasher model

    NASA Astrophysics Data System (ADS)

    Shabnam, Sabana; DasGupta, Sudeshna; Roy, Soumen Kumar

    2016-02-01

    Controversy regarding transitions in systems with global symmetry group O(3) has attracted the attention of researchers and the detailed nature of this transition is still not well understood. As an example of such a system in this paper we have studied a two-dimensional Lebwohl Lasher model, using the Wolff cluster algorithm. Though we have not been able to reach any definitive conclusions regarding the order present in the system, from finite size scaling analysis, hyperscaling relations and the behavior of the correlation function we have obtained strong indications regarding the presence of quasi-long range order and the existence of a line of critical points in our system.

  16. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    PubMed

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters were impactful on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry. © 2016 Society of Chemical Industry.
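
    A minimal individual-based sketch of quantitative resistance evolution in the spirit of the framework described above: each plant carries a continuous resistance trait, survival of a herbicide application increases with that trait, and the next generation's mean shifts by the heritable part of the selection differential. The logistic dose-response and all parameter values are illustrative assumptions, not the published parameterization.

```python
# Individual-based sketch of quantitative (polygenic) herbicide resistance:
# trait-dependent survival plus a breeder's-equation response each season.
# All parameters are illustrative, not those of the published model.
import numpy as np

rng = np.random.default_rng(6)

def survival_prob(trait, dose_effect=3.0):
    """Assumed logistic link between the resistance trait and survival."""
    return 1.0 / (1.0 + np.exp(dose_effect - trait))

n, h2, sd = 5000, 0.5, 1.0          # population size, heritability, phenotypic SD
mean_trait = 0.0

for season in range(1, 16):
    pop = rng.normal(mean_trait, sd, size=n)              # this season's phenotypes
    alive = pop[rng.random(n) < survival_prob(pop)]       # herbicide application
    if alive.size == 0:
        print(f"season {season}: population controlled")
        break
    # Breeder's equation: response = heritability x selection differential.
    mean_trait = mean_trait + h2 * (alive.mean() - pop.mean())
    print(f"season {season}: mean trait {mean_trait:.2f}, "
          f"field survival {alive.size / n:.2%}")
```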

  17. Permeability prediction of high Spor samples from spectral induced polarization (SIP): limitations of existing models

    NASA Astrophysics Data System (ADS)

    Robinson, J.; Slater, L. D.; Keating, K.; Parker, B. L.; Day-Lewis, F. D.; Robinson, T.

    2016-12-01

    Over the past two decades, mechanistic and empirical models have been proposed to predict permeability from spectral induced polarization (SIP) data. Given the sensitivity to mineral surfaces and pore spaces, SIP models that use length scales related to the pore volume normalized surface area (Spor) or pore diameter have been proposed. We performed extensive petrophysical measurements on two sandstone formations with contrasting lithology to investigate the sensitivity of SIP measurements to (1) Spor, and (2) the diameter at which pores are considered interconnected (Λ) as defined by mercury injection porosimetry. We then compared these hydraulic length scales to SIP measures of length scales associated with interfacial polarization. Imaginary conductivity, mean relaxation time and normalized chargeability from a Debye decomposition were correlated with Spor and Λ. Application of these SIP proxies of hydraulic length scale in recently proposed SIP permeability models revealed that predicted permeabilities for high Spor samples over-predict measured values by an order of magnitude. The high Spor samples were also outliers when permeability was predicted based on a recent model formulated in terms of the SIP relaxation time, electrical formation factor and a single, fixed diffusion coefficient. Improved permeability prediction from this model was explored by determining apparent diffusion coefficients based on formation type, Spor and imaginary conductivity. A Spor-specific diffusion coefficient improved the permeability predictions, although a similar improvement was not obtained using imaginary conductivity. Our findings suggest that existing SIP models for permeability prediction underperform in the case of low-permeability, high Spor materials.

  18. Melanoma screening: Informing public health policy with quantitative modelling.

    PubMed

    Gilmore, Stephen

    2017-01-01

    Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years (a result of the publicly funded mass media campaigns that began in the early 1980s), mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement: it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in 1982 has resulted in
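
    The abstract describes a Markov model of melanoma epidemiology calibrated to Australian data. The sketch below shows only the generic structure of such a discrete-time Markov cohort model: hypothetical health states, an annual transition matrix, and surveillance represented as a higher probability of moving from undetected to detected disease. States, probabilities and horizons are invented for illustration and bear no relation to the calibrated model.

```python
# Toy Markov cohort model (not the author's calibrated model): hypothetical annual
# transition probabilities among four states, with screening represented as an
# increased probability of moving from undetected to detected (treated) disease.
import numpy as np

STATES = ["healthy", "melanoma_undetected", "melanoma_detected", "dead"]

def transition_matrix(detection_prob):
    # Rows: current state, columns: next state; all values hypothetical.
    return np.array([
        [0.9980, 0.0015, 0.0000, 0.0005],                        # healthy
        [0.0000, 0.90 - detection_prob, detection_prob, 0.10],   # undetected
        [0.0000, 0.0000, 0.97, 0.03],                            # detected and treated
        [0.0000, 0.0000, 0.00, 1.00],                            # dead (absorbing)
    ])

def simulate(detection_prob, years=30):
    cohort = np.array([1.0, 0.0, 0.0, 0.0])  # start with an entirely healthy cohort
    P = transition_matrix(detection_prob)
    for _ in range(years):
        cohort = cohort @ P
    return dict(zip(STATES, cohort))

low = simulate(detection_prob=0.30)   # limited surveillance (assumed)
high = simulate(detection_prob=0.60)  # enhanced surveillance (assumed)
print(f"30-year mortality, low surveillance:  {low['dead']:.4f}")
print(f"30-year mortality, high surveillance: {high['dead']:.4f}")
```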

  19. Quantitative Modeling of the Alternative Pathway of the Complement System

    PubMed Central

    Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model, highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of the alternative pathway on the surface of pathogens, on which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent of host cells over the same time period. Our model reveals that tight regulation of complement starts in the fluid phase, where propagation of the alternative pathway is inhibited through the dismantlement of fluid-phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
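
    The published model comprises a large system of ODEs for complement proteins, fragments, and complexes. The toy system below is only a structural caricature of the activation-versus-regulation balance it describes: a convertase-like species forms autocatalytically from a precursor pool and is dismantled by a fluid-phase regulator. Species, rate constants, and units are hypothetical.

```python
# Minimal caricature of the model structure (not the published reaction network):
# a convertase-like species C is produced autocatalytically from a precursor pool
# and dismantled by a fluid-phase regulator. Rate constants are hypothetical.
from scipy.integrate import solve_ivp

k_tick = 1e-3   # slow spontaneous ("tick-over") initiation, 1/s (assumed)
k_amp = 0.05    # autocatalytic amplification, 1/(uM*s) (assumed)
k_reg = 0.08    # regulator-mediated decay, 1/(uM*s) (assumed)
R = 1.0         # regulator concentration, uM (assumed constant)

def rhs(t, y):
    precursor, convertase = y
    formation = k_tick * precursor + k_amp * convertase * precursor
    decay = k_reg * R * convertase
    return [-formation, formation - decay]

sol = solve_ivp(rhs, (0.0, 3600.0), y0=[10.0, 0.0], max_step=1.0)
print(f"convertase after 1 h: {sol.y[1, -1]:.3f} uM (toy units)")
```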

  20. Modelling bacterial growth in quantitative microbiological risk assessment: is it possible?

    PubMed

    Nauta, Maarten J

    2002-03-01

    Quantitative microbiological risk assessment (QMRA), predictive modelling and HACCP may be used as tools to increase food safety and can be integrated fruitfully for many purposes. However, when QMRA is applied to public health issues such as evaluating the status of public health, existing predictive models may not be suited to model bacterial growth. In this context, precise quantification of risks is more important than in the context of food manufacturing alone. In this paper, the modular process risk model (MPRM) is briefly introduced as a QMRA modelling framework. This framework can be used to model the transmission of pathogens through any food pathway, by assigning one of six basic processes (modules) to each of the processing steps. Bacterial growth is one of these basic processes. For QMRA, models of bacterial growth need to be expressed in terms of probability, for example to predict the probability that a critical concentration is reached within a certain amount of time. In contrast, available predictive models are developed and validated to produce point estimates of population sizes and therefore do not meet this requirement. Recent experience from a European risk assessment project is discussed to illustrate some of the problems that may arise when predictive growth models are used in QMRA. It is suggested that a new type of predictive model needs to be developed that incorporates modelling of variability and uncertainty in growth.
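
    The abstract argues that growth models used in QMRA should be expressed in terms of probability, for example the probability that a critical concentration is reached within a given time. The sketch below illustrates that kind of output with a simple Monte Carlo over variable lag times and exponential growth rates; the distributions and parameter values are hypothetical and not drawn from the cited project.

```python
# Sketch of the probabilistic output argued for in the abstract (not a validated
# predictive model): Monte Carlo over variable exponential growth rates and lag
# times to estimate P(concentration exceeds a critical level within storage time).
# All distributions and parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
storage_h = 48.0
log10_n0 = 2.0        # initial contamination, log10 CFU/g (assumed)
log10_critical = 6.0  # critical concentration, log10 CFU/g (assumed)

lag_h = rng.gamma(shape=4.0, scale=2.0, size=n)              # lag time, h (assumed)
mu = rng.normal(loc=0.15, scale=0.05, size=n).clip(min=0.0)  # growth rate, log10/h (assumed)

growth_time = np.clip(storage_h - lag_h, 0.0, None)
log10_nt = log10_n0 + mu * growth_time

p_exceed = np.mean(log10_nt >= log10_critical)
print(f"P(critical concentration reached within {storage_h:.0f} h) = {p_exceed:.3f}")
```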

  1. Mixed quantitative/qualitative modeling and simulation of the cardiovascular system.

    PubMed

    Nebot, A; Cellier, F E; Vallverdú, M

    1998-02-01

    The cardiovascular system is composed of the hemodynamical system and the central nervous system (CNS) control. Whereas the structure and functioning of the hemodynamical system are well known and a number of quantitative models have already been developed that capture the behavior of the hemodynamical system fairly accurately, the CNS control is, at present, still not completely understood and no good deductive models exist that are able to describe the CNS control from physical and physiological principles. The use of qualitative methodologies may offer an interesting alternative to quantitative modeling approaches for inductively capturing the behavior of the CNS control. In this paper, a qualitative model of the CNS control of the cardiovascular system is developed by means of the fuzzy inductive reasoning (FIR) methodology. FIR is a fairly new modeling technique that is based on the general system problem solving (GSPS) methodology developed by G.J. Klir (Architecture of Systems Problem Solving, Plenum Press, New York, 1985). Previous investigations have demonstrated the applicability of this approach to modeling and simulating systems, the structure of which is partially or totally unknown. In this paper, five separate controller models for different control actuations are described that have been identified independently using the FIR methodology. Then the loop between the hemodynamical system, modeled by means of differential equations, and the CNS control, modeled in terms of five FIR models, is closed, in order to study the behavior of the cardiovascular system as a whole. The model described in this paper has been validated for a single patient only.

  2. An existence result for a model of complete damage in elastic materials with reversible evolution

    NASA Astrophysics Data System (ADS)

    Bonetti, Elena; Freddi, Francesco; Segatti, Antonio

    2017-01-01

    In this paper, we consider a model describing evolution of damage in elastic materials, in which stiffness completely degenerates once the material is fully damaged. The model is written by using a phase transition approach, with respect to the damage parameter. In particular, a source of damage is represented by a quadratic form involving deformations, which vanishes in the case of complete damage. Hence, an internal constraint is ensured by a maximal monotone operator. The evolution of damage is considered "reversible", in the sense that the material may repair itself. We can prove an existence result for a suitable weak formulation of the problem, rewritten in terms of a new variable (an internal stress). Some numerical simulations are presented in agreement with the mathematical analysis of the system.

  3. Quantitative multiphase model for hydrothermal liquefaction of algal biomass

    SciTech Connect

    Li, Yalin; Leow, Shijie; Fedders, Anna C.; Sharma, Brajendra K.; Guest, Jeremy S.; Strathmann, Timothy J.

    2017-01-01

    Optimized incorporation of hydrothermal liquefaction (HTL, reaction in water at elevated temperature and pressure) within an integrated biorefinery requires accurate models to predict the quantity and quality of all HTL products. Existing models primarily focus on biocrude product yields with limited consideration for biocrude quality and aqueous, gas, and biochar co-products, and have not been validated with an extensive collection of feedstocks. In this study, HTL experiments (300 degrees C, 30 min) were conducted using 24 different batches of microalgae feedstocks with distinctive feedstock properties, which resulted in a wide range of biocrude (21.3-54.3 dry weight basis, dw%), aqueous (4.6-31.2 dw%), gas (7.1-35.6 dw%), and biochar (1.3-35.0 dw%) yields.

  4. Model based prediction of the existence of the spontaneous cochlear microphonic

    NASA Astrophysics Data System (ADS)

    Ayat, Mohammad; Teal, Paul D.

    2015-12-01

    In the mammalian cochlea, self-sustaining oscillations of the basilar membrane can cause vibration of the ear drum and produce spontaneous narrow-band air pressure fluctuations in the ear canal. These spontaneous fluctuations are known as spontaneous otoacoustic emissions. Small perturbations in feedback gain of the cochlear amplifier have been proposed to be the generation source of self-sustaining oscillations of the basilar membrane. We hypothesise that the self-sustaining oscillations resulting from small perturbations in feedback gain produce spontaneous potentials in the cochlea. We demonstrate that, according to the results of the model, a measurable spontaneous cochlear microphonic must exist in the human cochlea. The existence of this signal has not yet been reported. However, this spontaneous electrical signal could play an important role in auditory research. Successful or unsuccessful recording of this signal will indicate whether previous hypotheses about the generation source of spontaneous otoacoustic emissions are valid or should be amended. In addition, according to the proposed model, the spontaneous cochlear microphonic is essentially an electrical analogue of spontaneous otoacoustic emissions. In certain experiments, the spontaneous cochlear microphonic may be more easily detected near its generation site with proper electrical instrumentation than is the spontaneous otoacoustic emission.

  5. Quantitative Model of microRNA-mRNA interaction

    NASA Astrophysics Data System (ADS)

    Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj

    2012-02-01

    MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveal the existence of a threshold-linear output of protein based on the expression level of microRNA. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold-linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediary regime where the expression levels of mRNA and microRNA are of the same order of magnitude. In this crossover regime the mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA hypothesis (ceRNA), in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
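
    A minimal deterministic titration model reproduces the threshold-linear behaviour described above: free mRNA stays near zero while transcription is below the microRNA supply rate and then rises roughly linearly. The sketch below integrates such a three-species kinetic scheme to steady state; rate constants are hypothetical, and the stochastic (noise) analysis of the abstract is not reproduced.

```python
# Minimal titration sketch (not the paper's full model): mRNA and microRNA are
# produced and degraded, bind, and the complex is degraded. Sweeping the mRNA
# transcription rate shows the threshold-linear output described in the abstract.
# Rate constants are hypothetical.
from scipy.integrate import solve_ivp

k_mi = 1.0    # microRNA production (molecules/min, assumed)
g_m = 0.05    # free mRNA degradation (1/min, assumed)
g_mi = 0.05   # free microRNA degradation (1/min, assumed)
k_on = 0.02   # association rate (1/(molecule*min), assumed)
g_c = 0.5     # complex degradation (1/min, assumed)

def steady_free_mrna(k_m):
    def rhs(t, y):
        m, mi, c = y
        bind = k_on * m * mi
        return [k_m - g_m * m - bind,
                k_mi - g_mi * mi - bind,
                bind - g_c * c]
    sol = solve_ivp(rhs, (0.0, 5000.0), y0=[0.0, 0.0, 0.0])
    return sol.y[0, -1]

for k_m in (0.2, 0.5, 1.0, 1.5, 2.0, 3.0):
    print(f"transcription {k_m:>4.1f} -> free mRNA at steady state: {steady_free_mrna(k_m):7.1f}")
```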

  6. Quantitative Analysis of Intracellular Motility Based on Optical Flow Model

    PubMed Central

    Li, Heng

    2017-01-01

    Analysis of cell mobility is a key issue for abnormality identification and classification in cell biology research. However, since cell deformation induced by various biological processes is random and cell protrusion is irregular, it is difficult to measure cell morphology and motility in microscopic images. To address this dilemma, we propose an improved variational optical flow model for quantitative analysis of intracellular motility, which not only extracts intracellular motion fields effectively but also deals with the optical flow computation problem at the border by taking advantage of formulations based on the L1 and L2 norms, respectively. In the energy functional of our proposed optical flow model, the data term takes the form of the L2 norm; the smoothness term adapts to regional features through an adaptive parameter, using the L1 norm near the edge of the cell and the L2 norm away from the edge. We further extract histograms of oriented optical flow (HOOF) after the optical flow field of intracellular motion is computed. Then distances between different HOOFs are calculated as intracellular motion features to grade the intracellular motion. Experimental results show that the features extracted from HOOFs provide new insights into the relationship between cell motility and special pathological conditions.
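
    The variational optical flow computation itself is beyond a short example, but the HOOF feature step described above is easy to illustrate: flow vectors are binned by orientation, weighted by magnitude, and normalised. The binning convention below follows the common HOOF definition and is an assumption about, not a statement of, the paper's exact settings.

```python
# Sketch of the HOOF feature step only (the variational optical flow computation
# is omitted): bin flow vectors by orientation, weight by magnitude, and
# normalise to sum to one. The binning convention is an assumption here.
import numpy as np

def hoof(flow_u, flow_v, n_bins=30):
    """Histogram of oriented optical flow from per-pixel flow components."""
    magnitude = np.hypot(flow_u, flow_v)
    angle = np.arctan2(flow_v, flow_u)          # in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    hist, _ = np.histogram(angle, bins=bins, weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy flow field: mostly rightward motion with some noise
rng = np.random.default_rng(0)
u = 1.0 + 0.1 * rng.standard_normal((64, 64))
v = 0.1 * rng.standard_normal((64, 64))
h = hoof(u, v)
print("dominant orientation bin:", int(np.argmax(h)), "weight:", round(float(h.max()), 3))
```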

  7. A poultry-processing model for quantitative microbiological risk assessment.

    PubMed

    Nauta, Maarten; van der Fels-Klerx, Ine; Havelaar, Arie

    2005-02-01

    A poultry-processing model for a quantitative microbiological risk assessment (QMRA) of campylobacter is presented, which can also be applied to other QMRAs involving poultry processing. The same basic model is applied in each consecutive stage of industrial processing. It describes the effects of inactivation and removal of the bacteria, and the dynamics of cross-contamination in terms of the transfer of campylobacter from the intestines to the carcass surface and the environment, from the carcasses to the environment, and from the environment to the carcasses. From the model it can be derived that, in general, the effect of inactivation and removal is dominant for those carcasses with high initial bacterial loads, and cross-contamination is dominant for those with low initial levels. In other QMRA poultry-processing models, the input-output relationship between the numbers of bacteria on the carcasses is usually assumed to be linear on a logarithmic scale. By including some basic mechanistic detail, it is shown that this may not be realistic. As nonlinear behavior may affect the predicted effects of risk mitigations, this finding is relevant for risk management. Good knowledge of the variability of bacterial loads on poultry entering the process is important. The common practice in microbiology of presenting only the geometric mean of bacterial counts is insufficient: arithmetic means are more suitable, in particular to describe the effect of cross-contamination. The effects of logistic slaughter (scheduled processing) as a risk mitigation strategy are predicted to be small. Some additional complications in applying microbiological data obtained in processing plants are discussed.

  8. On the generalized poisson regression mixture model for mapping quantitative trait loci with count data.

    PubMed

    Cui, Yuehua; Kim, Dong-Yun; Zhu, Jun

    2006-12-01

    Statistical methods for mapping quantitative trait loci (QTL) have been extensively studied. While most existing methods assume a normal distribution of the phenotype, the normality assumption can easily be violated when phenotypes are measured as counts. One natural choice for dealing with count traits is to apply the classical Poisson regression model. However, conditional on covariates, the Poisson assumption of mean-variance equality may not be valid when data are potentially under- or overdispersed. In this article, we propose an interval-mapping approach for phenotypes measured as counts. We model the effects of QTL through a generalized Poisson regression model and develop efficient likelihood-based inference procedures. This approach, implemented with the EM algorithm, allows for a genomewide scan for the existence of QTL throughout the entire genome. The performance of the proposed method is evaluated through extensive simulation studies, along with comparisons with existing approaches such as the Poisson regression and the generalized estimating equation approach. An application to a rice tiller number data set is given. Our approach provides a standard procedure for mapping QTL involved in the genetic control of complex traits measured as counts.
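
    As a stripped-down illustration of the distributional choice discussed above (not the paper's EM-based interval-mapping procedure), the sketch below fits a generalized Poisson (Consul) distribution to simulated count data by maximum likelihood; its dispersion parameter delta allows over- or under-dispersion relative to the ordinary Poisson.

```python
# Minimal sketch (not the paper's EM-based interval-mapping procedure): maximum
# likelihood fit of a generalized Poisson (Consul) distribution, whose dispersion
# parameter delta captures over- (delta > 0) or under- (delta < 0) dispersion
# relative to the Poisson. Data below are simulated, not real phenotypes.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def gp_negloglik(params, y):
    theta, delta = params
    lam = theta + delta * y
    if theta <= 0 or np.any(lam <= 0):
        return np.inf
    # log P(Y=y) = log(theta) + (y-1)*log(theta + delta*y) - (theta + delta*y) - log(y!)
    ll = np.log(theta) + (y - 1) * np.log(lam) - lam - gammaln(y + 1)
    return -np.sum(ll)

rng = np.random.default_rng(2)
counts = rng.poisson(lam=4.0, size=500).astype(float)   # toy count phenotype

fit = minimize(gp_negloglik, x0=np.array([np.mean(counts), 0.0]),
               args=(counts,), method="Nelder-Mead")
theta_hat, delta_hat = fit.x
print(f"theta = {theta_hat:.3f}, delta = {delta_hat:.3f} "
      f"(implied mean = {theta_hat / (1 - delta_hat):.3f})")
```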

  9. Quantitative comparisons of analogue models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ˜20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  10. Endoscopic skull base training using 3D printed models with pre-existing pathology.

    PubMed

    Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes

    2015-03-01

    Endoscopic base of skull surgery has been growing in acceptance in the recent past due to improvements in visualisation and micro instrumentation as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides a very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platybasia and basilar invagination were created from the patient's original MRI and CT imaging data. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to rate the level of difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially, as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practise performing complex procedures in a controlled fashion under the supervision of experts.

  11. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models.

    PubMed

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-08-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee - varroa mite - virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions.

  12. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models

    PubMed Central

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-01-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee – varroa mite – virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431

  13. Quantitative phase-field modeling for boiling phenomena

    NASA Astrophysics Data System (ADS)

    Badillo, Arnoldo

    2012-10-01

    A phase-field model is developed for quantitative simulation of bubble growth in the diffusion-controlled regime. The model accounts for phase change and surface tension effects at the liquid-vapor interface of pure substances with large property contrast. The derivation of the model follows a two-fluid approach, where the diffuse interface is assumed to have an internal microstructure, defined by a sharp interface. Despite the fact that phases within the diffuse interface are considered to have their own velocities and pressures, an averaging procedure at the atomic scale allows all the constitutive equations to be expressed in terms of mixture quantities. From the averaging procedure and asymptotic analysis of the model, nonconventional terms appear in the energy and phase-field equations to compensate for the variation of the properties across the diffuse interface. Without these new terms, no convergence towards the sharp-interface model can be attained. The asymptotic analysis also revealed a very small thermal capillary length for real fluids, such as water, which makes it impossible for conventional phase-field models to capture bubble growth in the millimeter size range. For instance, important phenomena such as bubble growth and detachment from a hot surface could not be simulated due to the large number of grid points required to resolve all the scales. Since the shape of the liquid-vapor interface is primarily controlled by the effects of an isotropic surface energy (surface tension), a solution involving the elimination of the curvature from the phase-field equation is devised. The elimination of the curvature from the phase-field equation changes the length scale dominating the phase change from the thermal capillary length to the thickness of the thermal boundary layer, which is several orders of magnitude larger. A detailed analysis of the phase-field equation revealed that a split of this equation into two independent parts is possible for system sizes

  14. Quantitative property-structural relation modeling on polymeric dielectric materials

    NASA Astrophysics Data System (ADS)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using Quantitative Structure-Property Relationship (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR to polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), is developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and the inconsistency of molecular representations of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperature of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to a classic machine learning algorithm (support vector machine). Multi-mechanism detection is built on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data are available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix
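
    As context for the QSPR approach described above, the sketch below shows a generic descriptor-to-property regression workflow on synthetic data using a standard support vector regressor. It does not implement the thesis's infinite chain descriptors, PELM, or multi-mechanism detection; descriptors, targets, and hyperparameters are placeholders.

```python
# Generic QSPR workflow sketch (synthetic data; not the thesis's ICD descriptors
# or PELM algorithm): fit a support vector regressor mapping numeric descriptors
# to a target property and assess it by cross-validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                                         # 8 hypothetical descriptors
y = 3.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)   # toy property

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```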

  15. Are existing size-fractionated primary production models appropriate for UK shelf Seas?

    NASA Astrophysics Data System (ADS)

    Curran, K. F.; Tilstone, G.

    2016-02-01

    The shelf seas are some of Earth's most productive oceanic environments. Though they comprise <10% of the oceans' area, it is estimated that they contribute between 15 and 20% of annual global marine primary productivity and support the majority of commercial fisheries. This primary production is modified by changes in the taxonomic composition and size spectrum of phytoplankton, which in turn influence pelagic biogeochemistry by altering elemental stoichiometry and carbon export efficiency. Deriving primary production from remote sensing data has undoubtedly aided our understanding of these processes. However, few models have been developed for estimating taxonomic, size class or functional type-specific primary production in the shelf seas. Current methods to derive size-fractionated primary production depend on modelling size class-specific parameters from the entire community by using measurements such as diagnostic pigment concentrations. There is a paucity of direct photo-physiological measurements for distinct size classes in situ, and therefore limited data for validation of bio-optical production models applied to these dynamic regions. In this paper we use bio-optical parameters for three phytoplankton size classes (>20 µm, 20-2.0 µm and 2.0-0.2 µm) measured at two time-series stations in the Western English Channel from Spring 2014 to late Summer 2015 and from two cruises in the Celtic Sea during August 2014 and April 2015 to validate an existing model for deriving size-fractionated primary production from satellite data. The results indicate that current models of size-fractionated primary production, which are calibrated and validated using open ocean samples, are not accurate for the UK shelf seas due to errors in the assignment of size-specific photosynthetic parameters. Based on the size-fractionated bio-optical measurements made in the Celtic Sea and Western English Channel, we suggest ways in which these current models can be improved.

  16. Quantitative phase-field modeling for wetting phenomena.

    PubMed

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters, a measure of interface thickness W and β, which is used in the definition of the mixture viscosity μ = μ_l ϕ_l + μ_v ϕ_v + β μ_l ϕ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.

  17. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.

  19. Quantitative multiphase model for hydrothermal liquefaction of algal biomass

    DOE PAGES

    Li, Yalin; Leow, Shijie; Fedders, Anna C.; ...

    2017-01-17

    Here, optimized incorporation of hydrothermal liquefaction (HTL, reaction in water at elevated temperature and pressure) within an integrated biorefinery requires accurate models to predict the quantity and quality of all HTL products. Existing models primarily focus on biocrude product yields with limited consideration for biocrude quality and aqueous, gas, and biochar co-products, and have not been validated with an extensive collection of feedstocks. In this study, HTL experiments (300 °C, 30 min) were conducted using 24 different batches of microalgae feedstocks with distinctive feedstock properties, which resulted in a wide range of biocrude (21.3–54.3 dry weight basis, dw%), aqueous (4.6–31.2 dw%), gas (7.1–35.6 dw%), and biochar (1.3–35.0 dw%) yields. Based on these results, a multiphase component additivity (MCA) model was introduced to predict yields and characteristics of the HTL biocrude product and aqueous, gas, and biochar co-products, with only feedstock biochemical (lipid, protein, carbohydrate, and ash) and elemental (C/H/N) composition as model inputs. Biochemical components were determined to distribute across biocrude product/HTL co-products as follows: lipids to biocrude; proteins to biocrude > aqueous > gas; carbohydrates to gas ≈ biochar > biocrude; and ash to aqueous > biochar. Modeled quality indicators included biocrude C/H/N contents, higher heating value (HHV), and energy recovery (ER); aqueous total organic carbon (TOC) and total nitrogen (TN) contents; and biochar carbon content. The model was validated with HTL data from the literature, the potential to expand the application of this modeling framework to include waste biosolids (e.g., wastewater sludge, manure) was explored, and future research needs for industrial application were identified. Ultimately, the MCA model represents a critical step towards the integration of cultivation models with downstream HTL and biorefinery operations to enable system
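
    The sketch below shows only the additive structure of an MCA-style model: each product yield is a linear combination of the feedstock's lipid, protein, carbohydrate, and ash fractions. The fate coefficients used here are hypothetical placeholders, not the values fitted in the study, and the quality indicators (C/H/N, HHV, TOC, TN) are omitted.

```python
# Structure-only sketch of a multiphase component additivity (MCA) style model:
# each HTL product yield is predicted as a linear combination of feedstock
# biochemical fractions. Coefficient values below are hypothetical placeholders,
# not the fitted values reported in the paper.
import numpy as np

# Rows: products (biocrude, aqueous, gas, biochar); columns: lipid, protein, carbohydrate, ash
FATE_COEFFICIENTS = np.array([
    [0.95, 0.45, 0.20, 0.00],   # biocrude
    [0.02, 0.30, 0.15, 0.60],   # aqueous
    [0.02, 0.20, 0.35, 0.00],   # gas
    [0.01, 0.05, 0.30, 0.40],   # biochar
])
PRODUCTS = ["biocrude", "aqueous", "gas", "biochar"]

def predict_yields(lipid, protein, carbohydrate, ash):
    """Return predicted product yields (dry-weight fraction of feedstock)."""
    composition = np.array([lipid, protein, carbohydrate, ash])
    return dict(zip(PRODUCTS, FATE_COEFFICIENTS @ composition))

# Example microalgae batch (dry-weight fractions, hypothetical)
print(predict_yields(lipid=0.25, protein=0.45, carbohydrate=0.20, ash=0.10))
```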

  20. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.

    PubMed

    Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-04-01

    To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a univariate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that joint interaction analysis of multiple phenotypes has much higher power to detect interaction than interaction analysis of a single trait and may open a new direction towards fully uncovering the genetic structure of multiple phenotypes.

  1. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... ``Review of Quantitative Software Reliability Methods,'' BNL- 94047-2010 (ADAMS Accession No....

  2. A parametric investigation of an existing supersonic relative tip speed propeller noise model. [turboprop aircraft

    NASA Technical Reports Server (NTRS)

    Dittmar, J. H.

    1977-01-01

    A high tip speed turboprop is being considered as a future energy conservative airplane. The high tip speed of the propeller combined with the cruise speed of the airplane may result in supersonic relative flow on the propeller tips. These supersonic blade sections could generate noise that is a cabin environment problem. An existing supersonic propeller noise model was parametrically investigated to identify and evaluate the noise reduction variables. Both independent and interdependent parameter variations (constant propeller thrust) were performed. The noise reductions indicated by the independent investigation varied from sizable in the case of reducing Mach number to minimal for adjusting the thickness and loading distributions. The noise reduction possibilities of decreasing relative Mach number were further investigated during the interdependent variations. The interdependent investigation indicated that significant noise reductions could be achieved by increasing the propeller diameter and/or increasing the number of propeller blades while maintaining a constant propeller thrust.

  3. Frequency domain modeling and dynamic characteristics evaluation of existing wind turbine systems

    NASA Astrophysics Data System (ADS)

    Chiang, Chih-Hung; Yu, Chih-Peng

    2016-04-01

    It is quite well accepted that frequency domain procedures are suitable for the design and dynamic analysis of wind turbine structures, especially for floating offshore wind turbines, since random wind loads and wave induced motions are most likely simulated in the frequency domain. This paper presents specific applications of an effective frequency domain scheme to the linear analysis of wind turbine structures in which a 1-D spectral element was developed based on the axially-loaded member. The solution schemes are summarized for the spectral analyses of the tower, the blades, and the combined system with selected frequency-dependent coupling effect from foundation-structure interactions. Numerical examples demonstrate that the modal frequencies obtained using spectral-element models are in good agreement with those found in the literature. A 5-element mono-pile model results in less than 0.3% deviation from an existing 160-element model. It is preliminarily concluded that the proposed scheme is relatively efficient in performing quick verification for test data obtained from the on-site vibration measurement using the microwave interferometer.

  4. Testing process predictions of models of risky choice: a quantitative model comparison approach

    PubMed Central

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  5. Testing process predictions of models of risky choice: a quantitative model comparison approach.

    PubMed

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.

  6. Functional coverage of the human genome by existing structures, structural genomics targets, and homology models.

    PubMed

    Xie, Lei; Bourne, Philip E

    2005-08-01

    The bias in protein structure and function space resulting from experimental limitations and targeting of particular functional classes of proteins by structural biologists has long been recognized, but never continuously quantified. Using the Enzyme Commission and the Gene Ontology classifications as a reference frame, and integrating structure data from the Protein Data Bank (PDB), target sequences from the structural genomics projects, structure homology derived from the SUPERFAMILY database, and genome annotations from Ensembl and NCBI, we provide a quantified view, both at the domain and whole-protein levels, of the current and projected coverage of protein structure and function space relative to the human genome. Protein structures currently provide at least one domain that covers 37% of the functional classes identified in the genome; whole structure coverage exists for 25% of the genome. If all the structural genomics targets were solved (twice the current number of structures in the PDB), it is estimated that structures of one domain would cover 69% of the functional classes identified and complete structure coverage would be 44%. Homology models from existing experimental structures extend the 37% coverage to 56% of the genome as single domains and 25% to 31% for complete structures. Coverage from homology models is not evenly distributed by protein family, reflecting differing degrees of sequence and structure divergence within families. While these data provide coverage, conversely, they also systematically highlight functional classes of proteins for which structures should be determined. Current key functional families without structure representation are highlighted here; updated information on the "most wanted list" that should be solved is available on a weekly basis from http://function.rcsb.org:8080/pdb/function_distribution/index.html.

  7. Using Existing Coastal Models To Address Ocean Acidification Modeling Needs: An Inside Look at Several East and Gulf Coast Regions

    NASA Astrophysics Data System (ADS)

    Jewett, E.

    2013-12-01

    Ecosystem forecast models have been in development for many US coastal regions for decades in an effort to understand how certain drivers, such as nutrients, freshwater and sediments, affect coastal water quality. These models have been used to inform coastal management interventions such as imposition of total maximum daily load allowances for nutrients or sediments to control hypoxia, harmful algal blooms and/or water clarity. Given the overlap of coastal acidification with hypoxia, it seems plausible that the geochemical models built to explain hypoxia and/or HABs might also be used, with additional terms, to understand how atmospheric CO2 is interacting with local biogeochemical processes to affect coastal waters. Examples of existing biogeochemical models from Galveston, the northern Gulf of Mexico, Tampa Bay, West Florida Shelf, Pamlico Sound, Chesapeake Bay, and Narragansett Bay will be presented and explored for suitability for ocean acidification modeling purposes.

  8. A quantitative speciation model for the adsorption of organic pollutants on activated carbon.

    PubMed

    Grivé, M; García, D; Domènech, C; Richard, L; Rojo, I; Martínez, X; Rovira, M

    2013-01-01

    Granular activated carbon (GAC) is commonly used as an adsorbent in water treatment plants given its high capacity for retaining organic pollutants in the aqueous phase. The current knowledge of GAC behaviour is essentially empirical, and no quantitative description of the chemical relationships between GAC surface groups and pollutants has been proposed. In this paper, we describe a quantitative model for the adsorption of atrazine onto the GAC surface. The model is based on results of potentiometric titrations and three types of adsorption experiments which were carried out in order to determine the nature and distribution of the functional groups on the GAC surface, and to evaluate the adsorption characteristics of GAC towards atrazine. Potentiometric titrations have indicated the existence of at least two different families of chemical groups on the GAC surface, including phenolic- and benzoic-type surface groups. Adsorption experiments with atrazine have been satisfactorily modelled with the geochemical code PhreeqC, assuming that atrazine is sorbed onto the GAC surface at equilibrium (log Ks = 5.1 ± 0.5). Independent thermodynamic calculations suggest a possible adsorption of atrazine on a benzoic derivative. The present work opens a new approach for improving the adsorption capabilities of GAC towards organic pollutants by modifying its chemical properties.

  9. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies.

  10. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-01-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. PMID:26058849

  11. Existence and uniqueness of endemic states for the age-structured S-I-R epidemic model.

    PubMed

    Cha, Y; Iannelli, M; Milner, F A

    1998-06-15

    The existence and uniqueness of positive steady states for the age-structured S-I-R epidemic model with intercohort transmission are considered. Threshold results for the existence of endemic states are established for most cases. Uniqueness is shown in each case. The thresholds are explicitly computable in terms of demographic and epidemiological parameters of the model.

  12. Quantitative phase-field modeling of two-phase growth

    NASA Astrophysics Data System (ADS)

    Folch, R.; Plapp, M.

    2005-07-01

    A phase-field model that allows for quantitative simulations of low-speed eutectic and peritectic solidification under typical experimental conditions is developed. Its cornerstone is a smooth free-energy functional, specifically designed so that the stable solutions that connect any two phases are completely free of the third phase. For the simplest choice for this functional, the equations of motion for each of the two solid-liquid interfaces can be mapped to the standard phase-field model of single-phase solidification with its quartic double-well potential. By applying the thin-interface asymptotics and by extending the antitrapping current previously developed for this model, all spurious corrections to the dynamics of the solid-liquid interfaces linear in the interface thickness W can be eliminated. This means that, for small enough values of W , simulation results become independent of it. As a consequence, accurate results can be obtained using values of W much larger than the physical interface thickness, which yields a tremendous gain in computational power and makes simulations for realistic experimental parameters feasible. Convergence of the simulation outcome with decreasing W is explicitly demonstrated. Furthermore, the results are compared to a boundary-integral formulation of the corresponding free-boundary problem. Excellent agreement is found, except in the immediate vicinity of bifurcation points, a very sensitive situation where noticeable differences arise. These differences reveal that, in contrast to the standard assumptions of the free-boundary problem, out of equilibrium the diffuse trijunction region of the phase-field model can (i) slightly deviate from Young’s law for the contact angles, and (ii) advance in a direction that forms a finite angle with the solid-solid interface at each instant. While the deviation (i) extrapolates to zero in the limit of vanishing interface thickness, the small angle in (ii) remains roughly constant

  13. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models.

    PubMed

    Shimayoshi, Takao; Cha, Chae Young; Amano, Akira

    2015-01-01

    Mathematical cell models are effective tools to understand cellular physiological functions precisely. For detailed analysis of model dynamics in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, which is applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate contributions of respective model components to the model dynamics in the intact situation. The present technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named instantaneous equilibrium point, which represents the trend of a model variable at some instant. This article also illustrates applications of the method to comprehensive myocardial cell models for gaining insights into the mechanisms of action potential generation and the calcium transient. The analysis results exhibit quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and decline of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, leading to a better understanding of cellular dynamics.

  14. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.

    PubMed

    Gritsenko, Alexey A; Hulsman, Marc; Reinders, Marcel J T; de Ridder, Dick

    2015-08-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.
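
    A minimal sketch of the TASEP picture referred to above may help make the model class concrete: ribosomes initiate at one end of the mRNA, hop codon by codon at site-specific rates, and exclude each other within a footprint. Everything below (length, rates, footprint, time step) is a made-up toy, not the authors' fitting scheme or data.

```python
# Hedged sketch: a toy Monte Carlo simulation of a TASEP model of translation.
import numpy as np

rng = np.random.default_rng(1)
L = 100                       # codons on the mRNA (hypothetical)
alpha = 0.1                   # initiation attempts per second (hypothetical)
k = rng.uniform(0.5, 2.0, L)  # per-codon elongation rates, 1/s (hypothetical)
footprint = 10                # ribosome footprint in codons (assumed)
dt, steps = 0.01, 50_000      # time step (s) and number of steps

occupied = np.zeros(L, dtype=bool)
completed = 0
for _ in range(steps):
    # initiation: the first `footprint` codons must be free
    if not occupied[:footprint].any() and rng.random() < alpha * dt:
        occupied[0] = True
    # elongation / termination, swept from the 3' end so a ribosome moves at most once
    for i in range(L - 1, -1, -1):
        if occupied[i] and rng.random() < k[i] * dt:
            if i == L - 1:                                     # termination
                occupied[i] = False
                completed += 1
            elif not occupied[i + 1:i + 1 + footprint].any():  # steric exclusion
                occupied[i], occupied[i + 1] = False, True

print("protein output rate ≈", completed / (steps * dt), "per second")
print("final ribosome density ≈", occupied.mean())
```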

  15. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data

    PubMed Central

    Gritsenko, Alexey A.; Hulsman, Marc; Reinders, Marcel J. T.; de Ridder, Dick

    2015-01-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates. PMID:26275099

  16. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    ERIC Educational Resources Information Center

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is conducted to give the students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many of the aspects of quantitative absorbance spectroscopy.

  17. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    ERIC Educational Resources Information Center

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is conducted to give the students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many of the aspects of quantitative absorbance spectroscopy.

  18. Herd immunity and pneumococcal conjugate vaccine: a quantitative model.

    PubMed

    Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S

    2007-07-20

    Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons 5+ years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non
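
    To illustrate the modelling idea (a Poisson regression of case counts in unvaccinated groups on the average number of doses received by young children, with a population offset), here is a hedged sketch on entirely synthetic numbers; it is not the ABCs data or the published model specification.

```python
# Hedged sketch: Poisson regression of yearly case counts on an average-dose
# covariate with a population offset. All numbers are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

years = np.arange(1994, 2004)
population = np.full(len(years), 3_000_000)                    # hypothetical person-years
avg_doses = np.where(years >= 2000,
                     np.minimum((years - 1999) * 0.8, 3.0), 0.0)  # hypothetical uptake
cases = np.array([310, 305, 298, 312, 300, 307, 260, 210, 160, 120])  # synthetic counts

X = sm.add_constant(avg_doses)
fit = sm.GLM(cases, X, family=sm.families.Poisson(),
             offset=np.log(population)).fit()

# exp(coefficient) = multiplicative change in incidence per additional
# average dose received by young children (synthetic example only).
print("rate ratio per average dose:", np.exp(fit.params[1]))
print("p-value:", fit.pvalues[1])
```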

  19. Quantitative Models of the Dose-Response and Time Course of Inhalational Anthrax in Humans

    PubMed Central

    Schell, Wiley A.; Bulmahn, Kenneth; Walton, Thomas E.; Woods, Christopher W.; Coghill, Catherine; Gallegos, Frank; Samore, Matthew H.; Adler, Frederick R.

    2013-01-01

    Anthrax poses a community health risk due to accidental or intentional aerosol release. Reliable quantitative dose-response analyses are required to estimate the magnitude and timeline of potential consequences and the effect of public health intervention strategies under specific scenarios. Analyses of available data from exposures and infections of humans and non-human primates are often contradictory. We review existing quantitative inhalational anthrax dose-response models in light of criteria we propose for a model to be useful and defensible. To satisfy these criteria, we extend an existing mechanistic competing-risks model to create a novel Exposure–Infection–Symptomatic illness–Death (EISD) model and use experimental non-human primate data and human epidemiological data to optimize parameter values. The best fit to these data leads to estimates of a dose leading to infection in 50% of susceptible humans (ID50) of 11,000 spores (95% confidence interval 7,200–17,000), ID10 of 1,700 (1,100–2,600), and ID1 of 160 (100–250). These estimates suggest that use of a threshold to human infection of 600 spores (as suggested in the literature) underestimates the infectivity of low doses, while an existing estimate of a 1% infection rate for a single spore overestimates low dose infectivity. We estimate the median time from exposure to onset of symptoms (incubation period) among untreated cases to be 9.9 days (7.7–13.1) for exposure to ID50, 11.8 days (9.5–15.0) for ID10, and 12.1 days (9.9–15.3) for ID1. Our model is the first to provide incubation period estimates that are independently consistent with data from the largest known human outbreak. This model refines previous estimates of the distribution of early onset cases after a release and provides support for the recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses. PMID:24058320
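
    A quick way to see how the reported IDx values relate to one another is a one-parameter exponential dose-response curve calibrated to the ID50 quoted above; this is only a back-of-envelope sketch, not the paper's mechanistic EISD competing-risks model.

```python
# Hedged sketch: exponential dose-response curve anchored at the reported ID50.
import numpy as np

ID50 = 11_000                      # spores; central estimate quoted above
k = np.log(2) / ID50               # per-spore hazard under the exponential model

def p_infection(dose):
    """P(infection) = 1 - exp(-k * dose)."""
    return 1.0 - np.exp(-k * dose)

def dose_for(p):
    """Dose giving infection probability p."""
    return -np.log(1.0 - p) / k

print("ID10 ≈", round(dose_for(0.10)))          # close to the 1,700 quoted above
print("ID1  ≈", round(dose_for(0.01)))          # close to the 160 quoted above
print("P(infection) at 600 spores ≈", round(p_infection(600), 3))
```

    Under this simple curve the implied ID10 and ID1 fall close to the values reported above, which is what makes it a convenient sanity check; the paper's conclusions, of course, rest on the full EISD model.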

  20. Physically based estimation of soil water retention from textural data: General framework, new models, and streamlined existing models

    USGS Publications Warehouse

    Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.

    2007-01-01

    Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculations and other factors. © Soil Science Society of America. All rights reserved.
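
    The capillary inverse proportionality mentioned above is the Young-Laplace relation; the hedged sketch below applies it with a crude, purely illustrative assumption that the effective pore radius is a fixed fraction of the particle radius (the paper instead derives the pore geometry from explicit particle-pore relationships and the void ratio).

```python
# Hedged sketch: matric pressure from an effective pore radius via Young-Laplace.
# The 0.3 scaling from particle to pore radius is an illustrative guess only.
import numpy as np

surface_tension = 0.072        # N/m, air-water at about 25 C
contact_angle = 0.0            # radians, assumed perfectly wetting

def matric_pressure(pore_radius_m):
    """|psi| = 2 * gamma * cos(theta) / r for a cylindrical capillary, in Pa."""
    return 2.0 * surface_tension * np.cos(contact_angle) / pore_radius_m

particle_radii = np.array([2e-6, 2e-5, 2e-4])   # roughly clay / silt / sand, m
pore_radii = 0.3 * particle_radii               # crude illustrative scaling only
for R, psi in zip(particle_radii, matric_pressure(pore_radii)):
    print(f"R = {R:.0e} m  ->  |psi| ≈ {psi:,.0f} Pa")
```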

  1. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    PubMed

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
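
    The difference between the two calculation models is simply which residues are assumed to bind the dye; a small sketch makes that concrete. The peptide below is an arbitrary example, not one of the standards used in the study.

```python
# Hedged sketch: counting the residues assumed to bind Coomassie G-250 under
# the two calculation models compared above (M1: Arg + Lys; M2: Arg + Lys + His).
def binding_residues(sequence: str) -> dict:
    """Count dye-binding residues under the two models."""
    seq = sequence.upper()
    m1 = sum(seq.count(aa) for aa in "RK")   # Method 1: Arg + Lys
    m2 = m1 + seq.count("H")                 # Method 2: Arg + Lys + His
    return {"M1 (R+K)": m1, "M2 (R+K+H)": m2}

example = "MKHHRRGLKAVHEKLKR"                 # hypothetical peptide, not from the study
print(binding_residues(example))
```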

  2. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation

  3. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    PubMed

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use their potential effects on aquatic organisms are a cause for concern. Non-testing approaches, such as those based on quantitative structure-activity relationships (QSARs), are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZs toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes, proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH has been highlighted.

  4. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    PubMed

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Assessing Quantitative Literacy in Higher Education: An Overview of Existing Research and Assessments with Recommendations for Next-Generation Assessment. Research Report. ETS RR-14-22

    ERIC Educational Resources Information Center

    Roohr, Katrina Crotts; Graf, Edith Aurora; Liu, Ou Lydia

    2014-01-01

    Quantitative literacy has been recognized as an important skill in the higher education and workforce communities, focusing on problem solving, reasoning, and real-world application. As a result, there is a need by various stakeholders in higher education and workforce communities to evaluate whether college students receive sufficient training on…

  6. Global existence of the three-dimensional viscous quantum magnetohydrodynamic model

    SciTech Connect

    Yang, Jianwei; Ju, Qiangchang

    2014-08-15

    The global-in-time existence of weak solutions to the viscous quantum magnetohydrodynamic equations in a three-dimensional torus with large data is proved. The proof uses the Faedo-Galerkin method and weak compactness techniques.

  7. Existence of Natural Frequencies of Systems with Artificial Restraints and Their Convergence in Asymptotic Modelling

    NASA Astrophysics Data System (ADS)

    Ilanko, S.

    2002-08-01

    Rayleigh-Ritz frequencies of the constrained system were found to be bracketed by the frequencies of the asymptotic models with positive and negative restraints. However, the use of artificial restraints with negative stiffness has raised some important questions: would a system with a large negative restraint become unstable, and if so what is the guarantee that the frequencies of the asymptotic model would converge to those of the constrained system? This paper is the result of the author's attempt to answer these questions and gives a proof of existence of natural frequencies for systems with artificial restraints (springs) having positive or negative stiffness coefficients, and of their convergence towards those of constrained systems. Based on Rayleigh's theorem of separation, it has been shown that a vibratory system obtained by the addition of h restraints to an n-degree-of-freedom (d.o.f.) system, where h < n, will have at least (n - h) natural frequencies and modes, and that as the magnitude of the stiffness of the added restraints becomes very large, these (n - h) natural frequencies will converge to the (n - h) natural frequencies of a constrained system in which the displacements restrained by the springs are effectively constrained.
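
    The convergence claim can be checked numerically on a toy system. The hedged sketch below (not from the paper) adds one very stiff artificial spring to a 3-d.o.f. spring-mass chain and shows its lowest frequencies approaching those of the system with that coordinate rigidly constrained; all stiffness and mass values are arbitrary unit choices.

```python
# Hedged sketch: convergence of penalty-spring frequencies to constrained-system
# frequencies for a toy 3-d.o.f. spring-mass chain (illustration only).
import numpy as np
from scipy.linalg import eigh

M = np.eye(3)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])      # unit-value spring-mass chain, free end

# Constrained system: the first coordinate is rigidly fixed
w_constrained = np.sqrt(eigh(K[1:, 1:], M[1:, 1:], eigvals_only=True))

for k_art in (1e1, 1e3, 1e5):        # artificial restraint stiffness
    K_pen = K.copy()
    K_pen[0, 0] += k_art             # stiff spring grounding coordinate 0
    w = np.sqrt(eigh(K_pen, M, eigvals_only=True))
    print(f"k = {k_art:.0e}: lowest two frequencies {w[:2].round(4)}")

print("constrained-system frequencies:", w_constrained.round(4))
```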

  8. Thermodynamic Modeling of a Solid Oxide Fuel Cell to Couple with an Existing Gas Turbine Engine Model

    NASA Technical Reports Server (NTRS)

    Brinson, Thomas E.; Kopasakis, George

    2004-01-01

    The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in combining a solid oxide fuel cell (SOFC) to operate in conjunction with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate a SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For the project's application, the main issue with this model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create a SOFC model based on the DOE model that inputs the fuel cell's flow rates and outputs the temperature of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two procedures. First, for a given flow rate the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and temperature curve will be generated, and the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve because voltage is a function of temperature. Once the temperature profile is created and validated, the model will then be placed into the turbine engine simulation for system analysis.

  9. Thermodynamic Modeling of a Solid Oxide Fuel Cell to Couple with an Existing Gas Turbine Engine Model

    NASA Technical Reports Server (NTRS)

    Brinson, Thomas E.; Kopasakis, George

    2004-01-01

    The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in combining a solid oxide fuel cell (SOFC) to operate in conjunction with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate a SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For the project's application, the main issue with this model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create a SOFC model based on the DOE model that inputs the fuel cell's flow rates and outputs the temperature of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two procedures. First, for a given flow rate the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and temperature curve will be generated, and the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve because voltage is a function of temperature. Once the temperature profile is created and validated, the model will then be placed into the turbine engine simulation for system analysis.

  10. On the Non-Existence of Optimal Solutions and the Occurrence of "Degeneracy" in the CANDECOMP/PARAFAC Model

    ERIC Educational Resources Information Center

    Krijnen, Wim P.; Dijkstra, Theo K.; Stegeman, Alwin

    2008-01-01

    The CANDECOMP/PARAFAC (CP) model decomposes a three-way array into a prespecified number of "R" factors and a residual array by minimizing the sum of squares of the latter. It is well known that an optimal solution for CP need not exist. We show that if an optimal CP solution does not exist, then any sequence of CP factors monotonically decreasing…

  11. A Quantitative Review of Mentoring Research: Test of a Model

    ERIC Educational Resources Information Center

    Kammeyer-Mueller, John D.; Judge, Timothy A.

    2008-01-01

    Over the past 25 years, numerous researchers have studied the effects of mentoring on work outcomes. However, several reviewers have noted that many of the observed relationships between mentoring and its outcomes are potentially spurious. To summarize this widely dispersed literature, a quantitative research synthesis was conducted focused on…

  12. What Are We Doing When We Translate from Quantitative Models?

    ERIC Educational Resources Information Center

    Critchfield, Thomas S.; Reed, Derek D.

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may…

  13. What Are We Doing When We Translate from Quantitative Models?

    ERIC Educational Resources Information Center

    Critchfield, Thomas S.; Reed, Derek D.

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may…

  14. Combinatorial modeling of chromatin features quantitatively predicts DNA replication timing in Drosophila.

    PubMed

    Comoglio, Federico; Paro, Renato

    2014-01-01

    In metazoans, each cell type follows a characteristic, spatio-temporally regulated DNA replication program. Histone modifications (HMs) and chromatin binding proteins (CBPs) are fundamental for a faithful progression and completion of this process. However, no individual HM is strictly indispensable for origin function, suggesting that HMs may act combinatorially in analogy to the histone code hypothesis for transcriptional regulation. In contrast to gene expression, however, the relationship between combinations of chromatin features and DNA replication timing has not yet been demonstrated. Here, by exploiting a comprehensive data collection consisting of 95 CBPs and HMs, we investigated their combinatorial potential for the prediction of DNA replication timing in Drosophila using quantitative statistical models. We found that while combinations of CBPs exhibit moderate predictive power for replication timing, pairwise interactions between HMs lead to accurate predictions genome-wide that can be locally further improved by CBPs. Independent feature importance and model analyses led us to derive a simplified, biologically interpretable model of the relationship between chromatin landscape and replication timing that reaches 80% of the full model accuracy using six model terms. Finally, we show that pairwise combinations of HMs are able to predict differential DNA replication timing across different cell types. All in all, our work provides support for the existence of combinatorial HM patterns for DNA replication and reveals cell-type-independent key elements thereof, whose experimental investigation might help elucidate the regulatory mode of this fundamental cellular process.
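
    The pairwise-interaction idea can be illustrated with a few lines of scikit-learn on synthetic data: expand the HM features with all pairwise products and compare the cross-validated fit against main effects alone. This is only a sketch of the model class, not the authors' pipeline, features, or data.

```python
# Hedged sketch: predicting a replication-timing signal from pairwise
# interactions of histone-modification features, on synthetic data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_regions, n_hms = 1000, 6
hm = rng.normal(size=(n_regions, n_hms))             # HM enrichment values (synthetic)
timing = (hm[:, 0] - 0.5 * hm[:, 1] + 0.8 * hm[:, 0] * hm[:, 2]
          + rng.normal(scale=0.5, size=n_regions))   # synthetic timing with one interaction

pairwise = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_pairs = pairwise.fit_transform(hm)                 # main effects + all HM pairs

for name, X in [("main effects only", hm), ("with pairwise terms", X_pairs)]:
    r2 = cross_val_score(LinearRegression(), X, timing, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```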

  15. Combinatorial Modeling of Chromatin Features Quantitatively Predicts DNA Replication Timing in Drosophila

    PubMed Central

    Comoglio, Federico; Paro, Renato

    2014-01-01

    In metazoans, each cell type follows a characteristic, spatio-temporally regulated DNA replication program. Histone modifications (HMs) and chromatin binding proteins (CBPs) are fundamental for a faithful progression and completion of this process. However, no individual HM is strictly indispensable for origin function, suggesting that HMs may act combinatorially in analogy to the histone code hypothesis for transcriptional regulation. In contrast to gene expression, however, the relationship between combinations of chromatin features and DNA replication timing has not yet been demonstrated. Here, by exploiting a comprehensive data collection consisting of 95 CBPs and HMs, we investigated their combinatorial potential for the prediction of DNA replication timing in Drosophila using quantitative statistical models. We found that while combinations of CBPs exhibit moderate predictive power for replication timing, pairwise interactions between HMs lead to accurate predictions genome-wide that can be locally further improved by CBPs. Independent feature importance and model analyses led us to derive a simplified, biologically interpretable model of the relationship between chromatin landscape and replication timing that reaches 80% of the full model accuracy using six model terms. Finally, we show that pairwise combinations of HMs are able to predict differential DNA replication timing across different cell types. All in all, our work provides support for the existence of combinatorial HM patterns for DNA replication and reveals cell-type-independent key elements thereof, whose experimental investigation might help elucidate the regulatory mode of this fundamental cellular process. PMID:24465194

  16. CytoModeler: a tool for bridging large-scale network analysis and dynamic quantitative modeling

    PubMed Central

    Xia, Tian; Van Hemert, John; Dickerson, Julie A.

    2011-01-01

    Summary: CytoModeler is an open-source Java application based on the Cytoscape platform. It integrates large-scale network analysis and quantitative modeling by combining omics analysis on the Cytoscape platform, access to deterministic and stochastic simulators, and static and dynamic network context visualizations of simulation results. Availability: Implemented in Java, CytoModeler runs with Cytoscape 2.6 and 2.7. Binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv/cytomodeler/. Contact: julied@iastate.edu; netscape@iastate.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21511714

  17. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  18. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  19. On the existence and uniqueness of solution to a stochastic 2D Cahn-Hilliard-Navier-Stokes model

    NASA Astrophysics Data System (ADS)

    Tachim Medjo, T.

    2017-07-01

    We study in this article a stochastic version of a coupled Cahn-Hilliard-Navier-Stokes model in a two dimensional bounded domain. The model consists of the Navier-Stokes equations for the velocity, coupled with a Cahn-Hilliard model for the order (phase) parameter. We prove the existence and the uniqueness of a variational solution.

  20. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population.
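
    The latent-to-observed-scale step described above can be illustrated for the simplest case, a Poisson GLMM with a log link, where integrating the inverse link over the latent normal distribution has a closed form that the numerical integral should reproduce. The numbers below are arbitrary; the QGglmm package mentioned in the abstract implements the general machinery.

```python
# Hedged sketch: observed-scale mean for a Poisson (log link) latent model,
# by numerical integration over the latent normal distribution, checked
# against the closed form exp(mu + sigma^2 / 2). Values are made up.
import numpy as np
from scipy import integrate, stats

mu_latent = 1.0          # latent-scale mean (fixed effects), hypothetical
var_latent = 0.4         # total latent variance (additive + residual), hypothetical

def observed_mean(mu, var):
    """E[exp(l)] with l ~ Normal(mu, var), computed numerically."""
    pdf = stats.norm(mu, np.sqrt(var)).pdf
    val, _ = integrate.quad(lambda l: np.exp(l) * pdf(l), mu - 10, mu + 10)
    return val

print("numerical  :", observed_mean(mu_latent, var_latent))
print("closed form:", np.exp(mu_latent + var_latent / 2))
```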

  1. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models

    PubMed Central

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-01-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. PMID:27591750

  2. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model

    PubMed Central

    Ma, Rongfei

    2015-01-01

    In this paper, a quantitative analysis model for ammonia based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain current-voltage (I-V) data, and the measurement data were processed with the non-linear bistable dynamics model. Results showed that the proposed method quantitatively determined ammonia concentrations. PMID:25975362

  3. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model.

    PubMed

    Ma, Rongfei

    2015-01-01

    In this paper, a quantitative analysis model for ammonia based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain current-voltage (I-V) data, and the measurement data were processed with the non-linear bistable dynamics model. Results showed that the proposed method quantitatively determined ammonia concentrations.

  4. Quantitative, comprehensive, analytical model for magnetic reconnection in Hall magnetohydrodynamics.

    PubMed

    Simakov, Andrei N; Chacón, L

    2008-09-05

    Dissipation-independent, or "fast", magnetic reconnection has been observed computationally in Hall magnetohydrodynamics (MHD) and predicted analytically in electron MHD. However, a quantitative analytical theory of reconnection valid for arbitrary ion inertial lengths d_i has been lacking and is proposed here for the first time. The theory describes a two-dimensional reconnection diffusion region, provides expressions for reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d_i. It also confirms the electron MHD prediction that both open and elongated diffusion regions allow fast reconnection, and reveals strong dependence of the reconnection rates on d_i.

  5. Dynamics of childhood growth and obesity development and validation of a quantitative mathematical model

    USDA-ARS?s Scientific Manuscript database

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...

  6. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  7. Numerical modeling of flow focusing: Quantitative characterization of the flow regimes

    NASA Astrophysics Data System (ADS)

    Mamet, V.; Namy, P.; Dedulle, J.-M.

    2017-09-01

    Among droplet generation technologies, the flow focusing technique is a major process due to its control, stability, and reproducibility. In this process, one fluid (the continuous phase) interacts with another one (the dispersed phase) to create small droplets. Experimental assays in the literature on gas-liquid flow focusing have shown that different jet regimes can be obtained depending on the operating conditions. However, the underlying physical phenomena remain unclear, especially mechanical interactions between the fluids and the oscillation phenomenon of the liquid. In this paper, based on published studies, a numerical diphasic model has been developed to take into consideration the mechanical interaction between phases, using the Cahn-Hilliard method to monitor the interface. Depending on the liquid/gas inputs and the geometrical parameters, various regimes can be obtained, from a steady state regime to an unsteady one with liquid oscillation. In the dispersed phase, the model enables us to compute the evolution of fluid flow, both in space (size of the recirculation zone) and in time (period of oscillation). The transition between unsteady and stationary regimes is assessed in relation to liquid and gas dimensionless numbers, showing the existence of critical thresholds. This model successfully highlights, qualitatively and quantitatively, the influence of the geometry of the nozzle, in particular, its inner diameter.

  8. Impact assessment of abiotic resources in LCA: quantitative comparison of selected characterization models.

    PubMed

    Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F

    2014-10-07

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA.
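
    The pairwise log-linear comparison described above amounts to regressing one method's log-transformed characterization factors on another's over the resources they share; a hedged sketch on synthetic numbers is given below (the real study did this for all pairs of the 11 methods and for the 2247 inventory data sets).

```python
# Hedged sketch: log-linear regression between the characterization factors of
# two hypothetical methods over shared resources (synthetic data, not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
log_cf_method_a = rng.normal(size=30)                           # synthetic log10 CFs
log_cf_method_b = 0.9 * log_cf_method_a + rng.normal(scale=0.4, size=30)

slope, intercept, r, p, se = stats.linregress(log_cf_method_a, log_cf_method_b)
print(f"log-log slope = {slope:.2f}, R^2 = {r**2:.2f}, p = {p:.2g}")
```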

  9. Interaction of ascending magma with pre-existing crustal structures: Insights from analogue modeling

    NASA Astrophysics Data System (ADS)

    Le Corvec, N.; Menand, T.; Rowland, J. V.

    2010-12-01

    Magma transport through dikes is a major component of the development of basaltic volcanic fields. Basaltic volcanic fields occur in many different tectonic settings, from tensile (e.g., Camargo Volcanic Field, Mexico) to compressive (e.g., Abu Monogenetic Volcano Group, Japan). However, an important observation is that, independently of their tectonic setting, volcanic fields are characterized by numerous volcanic centers showing clustering and lineaments, each volcanic center typically resulting from a single main eruption. Analyses from the Auckland Volcanic Field reveal that, for each eruption, magma was transported from its source and reached the surface at different places within the same field, which raises the important question of the relative importance of 1) the self-propagation of magma through pristine rock, as opposed to 2) the control exerted by pre-existing structures. These two mechanisms have different implications for the alignment of volcanic centers in a field, as these may reflect either 1) the state of crustal stress dikes would have experienced (with a tendency to propagate perpendicular to the least compressive stress) or 2) the interaction of propagating dikes with pre-existing crustal faults. In the latter case, lineaments might not be related to the syn-emplacement state of stress of the crust. To address this issue, we have carried out a series of analogue experiments in order to constrain the interaction of a propagating magma-filled dike with superficial pre-existing structures (e.g., fracture, fault, joint system). The experiments involved the injection of air (a buoyant magma analogue) into elastic gelatine solids (crustal rock analogues). Cracks were cut into the upper part of the gelatine solids prior to the injection of air to simulate the presence of pre-existing fractures. The volume of the propagating dikes, their distance from pre-existing fractures and the ambient stress field were systematically varied to assess their influence.

  10. Global existence of solutions and uniform persistence of a diffusive predator-prey model with prey-taxis

    NASA Astrophysics Data System (ADS)

    Wu, Sainan; Shi, Junping; Wu, Boying

    2016-04-01

    This paper proves the global existence and boundedness of solutions to a general reaction-diffusion predator-prey system with prey-taxis defined on a smooth bounded domain with no-flux boundary condition. The result holds for domains in arbitrary spatial dimension and small prey-taxis sensitivity coefficient. This paper also proves the existence of a global attractor and the uniform persistence of the system under some additional conditions. Applications to models from ecology and chemotaxis are discussed.

  11. Global existence and uniqueness of classical solutions for a generalized quasilinear parabolic equation with application to a glioblastoma growth model.

    PubMed

    Wen, Zijuan; Fan, Meng; Asiri, Asim M; Alzahrani, Ebraheem O; El-Dessoky, Mohamed M; Kuang, Yang

    2017-04-01

    This paper studies the global existence and uniqueness of classical solutions for a generalized quasilinear parabolic equation with appropriate initial and mixed boundary conditions. Under some practicable regularity criteria on diffusion item and nonlinearity, we establish the local existence and uniqueness of classical solutions based on a contraction mapping. This local solution can be continued for all positive time by employing the methods of energy estimates, Lp-theory, and Schauder estimate of linear parabolic equations. A straightforward application of global existence result of classical solutions to a density-dependent diffusion model of in vitro glioblastoma growth is also presented.

  12. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of the thiM operon and transcription and translation of the thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by the A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506
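
    The induced-fit scheme described above (initial TPP binding followed by a conformational switch to a higher-affinity complex) is easy to write down as two coupled steps; the hedged sketch below integrates it with entirely hypothetical rate constants, just to make the two-step kinetics concrete. It is not a reproduction of the paper's fitted parameters or of its transcription-coupled regulation model.

```python
# Hedged sketch: induced-fit binding, R + TPP <-> R.TPP -> R*.TPP, as ODEs
# with hypothetical rate constants and TPP assumed in excess.
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 1e6, 10.0      # initial binding, 1/(M*s) and 1/s (hypothetical)
k_conf, k_rev = 0.5, 0.01    # conformational switch forward/back, 1/s (hypothetical)
tpp = 5e-6                   # TPP concentration, M (hypothetical, held constant)

def rhs(t, y):
    r_free, r_tpp, r_star = y
    bind = k_on * r_free * tpp - k_off * r_tpp
    fold = k_conf * r_tpp - k_rev * r_star
    return [-bind, bind - fold, fold]

sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 60, 7)
for ti, (rf, rt, rs) in zip(t, sol.sol(t).T):
    print(f"t = {ti:4.0f} s  free = {rf:.2f}  initial complex = {rt:.2f}  switched = {rs:.2f}")
```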

  13. Photon-tissue interaction model for quantitative assessment of biological tissues

    NASA Astrophysics Data System (ADS)

    Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann

    2014-02-01

    In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.

  14. Using Item-Type Performance Covariance to Improve the Skill Model of an Existing Tutor

    ERIC Educational Resources Information Center

    Pavlik, Philip I., Jr.; Cen, Hao; Wu, Lili; Koedinger, Kenneth R.

    2008-01-01

    Using data from an existing pre-algebra computer-based tutor, we analyzed the covariance of item-types with the goal of describing a more effective way to assign skill labels to item-types. Analyzing covariance is important because it allows us to place the skills in a related network in which we can identify the role each skill plays in learning…

  15. Existence Theorems for Vortices in the Aharony-Bergman-Jaferis-Maldacena Model

    NASA Astrophysics Data System (ADS)

    Han, Xiaosen; Yang, Yisong

    2015-01-01

    A series of sharp existence and uniqueness theorems are established for the multiple vortex solutions in the supersymmetric Chern-Simons-Higgs theory formalism of Aharony, Bergman, Jaferis, and Maldacena, for which the Higgs bosons and Dirac fermions lie in the bifundamental representation of the general gauge symmetry group. The governing equations are of the BPS type and derived by Kim, Kim, Kwon, and Nakajima in the mass-deformed framework labeled by a continuous parameter.

  16. Existence and large time behavior for a stochastic model of modified magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Razafimandimby, Paul André; Sango, Mamadou

    2015-10-01

    In this paper, we study a system of nonlinear stochastic partial differential equations describing the motion of turbulent non-Newtonian media in the presence of fluctuating magnetic field. The system is basically obtained by a coupling of the dynamical equations of a non-Newtonian fluids having p-structure and the Maxwell equations. We mainly show the existence of weak martingale solutions and their exponential decay when time goes to infinity.

  17. Quantitative analytical model for magnetic reconnection in hall magnetohydrodynamics

    SciTech Connect

    Simakov, Andrei N

    2008-01-01

    Magnetic reconnection is of fundamental importance for laboratory and naturally occurring plasmas. Reconnection usually develops on time scales which are much shorter than those associated with classical collisional dissipation processes, and which are not fully understood. While such dissipation-independent (or 'fast') reconnection rates have been observed in particle and Hall magnetohydrodynamics (MHD) simulations and predicted analytically in electron MHD, a quantitative analytical theory of fast reconnection valid for arbitrary ion inertial lengths d_i has been lacking. Here we propose such a theory without a guide field. The theory describes two-dimensional magnetic field diffusion regions, provides expressions for the reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d_i. It also demonstrates that both open X-point and elongated diffusion regions allow dissipation-independent reconnection and reveals a possibility of strong dependence of the reconnection rates on d_i.

  18. A Scoping Review on Models of Integrative Medicine: What Is Known from the Existing Literature?

    PubMed

    Lim, Eun Jin; Vardy, Janette L; Oh, Byeong Sang; Dhillon, Haryana M

    2017-01-01

    Integrative medicine (IM) has been recognized and introduced into Western healthcare systems over the past two decades. Limited information on IM models is available to guide development of an optimal healthcare service. A scoping review was carried out to evaluate IM models in the extant literature, including the distinctive features of each model, to gain an understanding of the core requirements needed to develop models of IM that best meet the needs of patients. Directed content analysis was used to classify the IM models into systems based on a coding schema developed from theoretical models and to identify the key concepts of each system. From 1374 articles identified, 45 studies were included. Models were categorized as theoretical and practical and were subdivided into five main models: coexistence, cooptative, cooperative, collaborative, and patient-centered care. They were then divided into three systems (independent, dependent, and integrative) on the basis of the level of involvement of general practitioners and complementary and alternative medicine (CAM) practitioners. The theoretical coexistence and cooptative models have distinct roles for different health care professionals, whereas practical models tend to be ad hoc market-driven services, dependent on patient demand. The cooperative and collaborative models were team-based, with formalized interaction between the two medical paradigms of conventional medicine and CAM, with the practical models focusing on facilitating communication, behaviors, and relationships. The patient-centered care model recognized the philosophy of CAM and required collaboration between disciplines based around patient needs. The focus of IM models has shifted from providers to patients with the independent and integrative systems. This may require a philosophical shift for IM. Further research is required to best understand how to practice patient-centered care in IM services.

  19. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    USDA-ARS?s Scientific Manuscript database

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  20. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling:• SDMProjectBuilder (which includes the Microbial Source Module as part...

  1. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time.

  3. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  4. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  8. Comparison of existing models to simulate anaerobic digestion of lipid-rich waste.

    PubMed

    Béline, F; Rodriguez-Mendez, R; Girault, R; Bihan, Y Le; Lessard, P

    2017-02-01

    Models for anaerobic digestion of lipid-rich waste taking inhibition into account were reviewed and, if necessary, adjusted to the ADM1 model framework in order to compare them. Experimental data from anaerobic digestion of slaughterhouse waste at an organic loading rate (OLR) ranging from 0.3 to 1.9 kg VS m^-3 d^-1 were used to compare and evaluate models. Experimental data obtained at low OLRs were accurately modeled whatever the model, thereby validating the stoichiometric parameters used and the influent fractionation. However, at higher OLRs, although inhibition parameters were optimized to reduce differences between experimental and simulated data, no model was able to accurately simulate the accumulation of substrates and intermediates, mainly due to incorrect simulation of pH. A simulation using pH based on experimental data showed that acetogenesis and methanogenesis were the steps most sensitive to LCFA inhibition and enabled identification of the inhibition parameters of both steps.
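
    As a rough illustration of how such inhibition is often written in ADM1-style models, the sketch below applies a non-competitive LCFA inhibition factor to a Monod uptake rate; this is a generic construction with invented parameter values, not one of the reviewed models.

```python
# Generic ADM1-style sketch (not one of the reviewed models): a Monod
# substrate-uptake rate modulated by a non-competitive LCFA inhibition term.
# All parameter values are illustrative.

def uptake_rate(S, X, S_lcfa, k_m=10.0, K_S=0.5, K_I_lcfa=2.0):
    """Uptake rate of substrate S by biomass X, reduced by LCFA level S_lcfa."""
    monod = S / (K_S + S)                        # limitation by the substrate itself
    inhibition = K_I_lcfa / (K_I_lcfa + S_lcfa)  # non-competitive inhibition, in [0, 1]
    return k_m * monod * inhibition * X

# Same substrate and biomass, increasing LCFA concentrations:
for S_lcfa in (0.0, 1.0, 5.0):
    print(S_lcfa, round(uptake_rate(S=2.0, X=1.0, S_lcfa=S_lcfa), 3))
```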

  9. Women, birth practitioners, and models of pregnancy and birth-does consensus exist?

    PubMed

    Gibson, Erica

    2014-02-01

    Women have differing beliefs about pregnancy and birth, and will be more suited to one type of practitioner versus another, depending on whether they believe that birth is a natural or a medical event. I hypothesize that if women and their practitioners have similar explanatory models, then the women may experience a better relationship with their practitioners, resulting in greater understanding of birth expectations, leading to improvements in experience and outcomes. In this article I explore how differing beliefs constitute identifiable models that can be distinguished as aligning with the midwifery model versus the medical model of birth.

  10. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.
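
    The final point can be illustrated with a generic sketch (not the SAR-specific statistics of the paper): draw many small samples from a hypothesized Gaussian model, test each one, and report the percentage failing at several significance levels.

```python
# Sketch of the "many small samples" assessment idea: draw many small samples
# from a hypothesized Gaussian model, test each one, and report the percentage
# of samples that would fail the test at a given significance level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = [rng.normal(loc=0.0, scale=1.0, size=8) for _ in range(2000)]  # 2000 samples of size 8
pvals = np.array([stats.shapiro(s).pvalue for s in samples])

for alpha in (0.01, 0.05, 0.10):
    frac_fail = np.mean(pvals < alpha)
    print(f"alpha={alpha:.2f}: {100 * frac_fail:.1f}% of samples fail")
# When the hypothesized model is correct, the failure percentage tracks alpha.
```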

  11. Quantitative calculation model of dilution ratio based on reaching standard of water function zone

    NASA Astrophysics Data System (ADS)

    Du, Zhong; Dong, Zengchuan; Wu, Huixiu; Yang, Lin

    2017-03-01

    The dilution ratio is an important indicator in water quality assessment, and it is difficult to calculate quantitatively. This paper proposes a quantitative calculation model of the dilution ratio based on the permissible pollution bearing capacity model of a water function zone. The model contains three concentration parameters; the 1-D model additionally has three river-characteristic parameters. Applications of the model are based on the national standard for wastewater discharge concentration and the reaching-standard concentration. The results show an inverse correlation between the dilution ratio and C_P and C_0, and a positive correlation with C_s. The quantitative maximum control standard of the dilution ratio is 12.50% by the 0-D model and 22.96% by the 1-D model. Moreover, we propose to choose the minimum parameter and identify invalid pollution bearing capacity.
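
    A minimal sketch of the 0-D idea, assuming a simple complete-mixing mass balance (an illustrative reconstruction, not necessarily the authors' exact formulation): the maximum allowable wastewater-to-river flow ratio follows from the discharge concentration C_P, the upstream concentration C_0, and the standard C_s, and it indeed decreases with C_P and C_0 and increases with C_s.

```python
# Illustrative 0-D complete-mixing mass balance (an assumed reconstruction):
#   (Q_r*C_0 + Q_w*C_P) / (Q_r + Q_w) <= C_s
#   =>  Q_w/Q_r <= (C_s - C_0) / (C_P - C_s)
def max_dilution_ratio(C_P, C_0, C_s):
    """C_P: discharge concentration, C_0: upstream concentration,
    C_s: standard concentration (same units, with C_P > C_s > C_0)."""
    if not (C_P > C_s > C_0):
        raise ValueError("expects C_P > C_s > C_0")
    return (C_s - C_0) / (C_P - C_s)

# Hypothetical COD values (mg/L): discharge 100, upstream 15, standard 25.
print(f"{100 * max_dilution_ratio(C_P=100, C_0=15, C_s=25):.2f}%")
# Larger C_P or C_0 shrinks the allowable ratio; a larger C_s enlarges it.
```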

  12. Shape Optimization for Navier-Stokes Equations with Algebraic Turbulence Model: Existence Analysis

    SciTech Connect

    Bulicek, Miroslav; Haslinger, Jaroslav; Malek, Josef; Stebel, Jan

    2009-10-15

    We study a shape optimization problem for the paper machine headbox, which distributes a mixture of water and wood fibers in the paper-making process. The aim is to find a shape that a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to an optimal control problem in which the control variable is the shape of the domain representing the header, while the state problem is a generalized stationary Navier-Stokes system with nontrivial mixed boundary conditions. In this paper we prove the existence of solutions both to the generalized Navier-Stokes system and to the shape optimization problem.

  13. Existence and time-discretization for the finite-strain Souza-Auricchio constitutive model for shape-memory alloys

    NASA Astrophysics Data System (ADS)

    Frigeri, Sergio; Stefanelli, Ulisse

    2012-01-01

    We prove the global existence of solutions for a shape-memory alloys constitutive model at finite strains. The model has been presented in Evangelista et al. (Int J Numer Methods Eng 81(6):761-785, 2010) and corresponds to a suitable finite-strain version of the celebrated Souza-Auricchio model for SMAs (Auricchio and Petrini in Int J Numer Methods Eng 55:1255-1284, 2002; Souza et al. in J Mech A Solids 17:789-806, 1998). We reformulate the model in purely variational fashion under the form of a rate-independent process. Existence of suitably weak (energetic) solutions to the model is obtained by passing to the limit within a constructive time-discretization procedure.

  14. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    NASA Astrophysics Data System (ADS)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission from commercial space tourism, to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting edge rocketry with the assumption that the astronauts could be trained and will adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  15. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
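
    Dempster's rule of combination itself is standard and can be sketched as follows for two basic probability assignments over a small set of fault hypotheses; the hypothesis names and masses are invented for illustration, and this is not the paper's implementation.

```python
# Generic Dempster's rule of combination for two basic probability assignments
# (BPAs) over a frame of fault hypotheses (textbook formulation, illustrative
# masses only).
from itertools import product

def combine(m1, m2):
    """m1, m2: dicts mapping frozenset(hypotheses) -> mass, each summing to 1."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two diagnostic models weighing the fault hypotheses {pump, valve}:
m_model1 = {frozenset({"pump"}): 0.6, frozenset({"pump", "valve"}): 0.4}
m_model2 = {frozenset({"valve"}): 0.3, frozenset({"pump", "valve"}): 0.7}
print(combine(m_model1, m_model2))
```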

  16. Had the Planet Mars Not Existed: Kepler's Equant Model and Its Physical Consequences

    ERIC Educational Resources Information Center

    Bracco, C.; Provost, J.P.

    2009-01-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal…

  17. Unified Program Design: Organizing Existing Programming Models, Delivery Options, and Curriculum

    ERIC Educational Resources Information Center

    Rubenstein, Lisa DaVia; Ridgley, Lisa M.

    2017-01-01

    A persistent problem in the field of gifted education has been the lack of categorization and delineation of gifted programming options. To address this issue, we propose Unified Program Design as a structural framework for gifted program models. This framework defines gifted programs as the combination of delivery methods and curriculum models.…

  19. A Quantitative Causal Model Theory of Conditional Reasoning

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  1. Discrete symmetry enhancement in non-Abelian models and the existence of asymptotic freedom

    NASA Astrophysics Data System (ADS)

    Patrascioiu, Adrian; Seiler, Erhard

    2001-09-01

    We study the universality between a discrete spin model with icosahedral symmetry and the O(3) model in two dimensions. For this purpose we study numerically the renormalized two-point functions of the spin field and the four point coupling constant. We find that those quantities seem to have the same continuum limits in the two models. This has far reaching consequences, because the icosahedron model is not asymptotically free in the sense that the coupling constant proposed by Lüscher, Weisz, and Wolff [Nucl. Phys. B359, 221 (1991)] does not approach zero in the short distance limit. By universality this then also applies to the O(3) model, contrary to the predictions of perturbation theory.

  2. Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)

    NASA Astrophysics Data System (ADS)

    Sapiano, M. R.

    2010-12-01

    Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use Operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation with higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining weights and errors of any merged products.
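
    One common way to quantify such forecast skill, shown here only as an illustrative sketch (the metric choice is ours, not necessarily the paper's), is the equitable threat score for exceeding a rain/no-rain threshold.

```python
# Sketch of one common categorical skill measure for precipitation forecasts:
# the equitable threat score (ETS) for exceeding a rain/no-rain threshold.
# Synthetic data; illustrative only.
import numpy as np

def equitable_threat_score(forecast, observed, threshold=1.0):
    """forecast, observed: arrays of accumulations (e.g., mm/day)."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    hits_random = (hits + misses) * (hits + false_alarms) / f.size
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else np.nan

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.5, scale=4.0, size=1000)       # synthetic "observations"
fcst = obs * rng.lognormal(0.0, 0.5, size=1000)        # noisy "forecast"
print(round(float(equitable_threat_score(fcst, obs)), 3))
```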

  3. A quantitative model of application slow-down in multi-resource shared systems

    DOE PAGES

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often include energy consumption. Maximizing these benefits comes at a price-resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system; referred to as dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by a vector-valued loading statistics and dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist such as multi-core environments and multi-disks environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on the parallel file systems show that D-factor accurately captures the slow down of concurrent applications in such environments.

  4. A quantitative model of application slow-down in multi-resource shared systems

    SciTech Connect

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often include energy consumption. Maximizing these benefits comes at a price-resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system; referred to as dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by a vector-valued loading statistics and dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist such as multi-core environments and multi-disks environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on the parallel file systems show that D-factor accurately captures the slow down of concurrent applications in such environments.
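
    A minimal sketch of the stated idea that a job's dilation factor is a quadratic function of loading vectors; the loading matrix and vectors below are invented for illustration and are not measured values from the paper.

```python
# Sketch: a job's dilation (slow-down) factor as a quadratic function of
# resource-loading vectors. The coupling matrix and loads are invented.
import numpy as np

def dilation_factor(own_load, coscheduled_loads, M):
    """own_load: loading vector of the job of interest (per resource);
    coscheduled_loads: loading vectors of jobs sharing the node;
    M: symmetric matrix coupling contention across resources."""
    others = np.sum(coscheduled_loads, axis=0)
    return 1.0 + float(own_load @ M @ others)   # 1.0 means no slow-down when run alone

M = np.array([[0.8, 0.1],      # CPU-CPU and CPU-disk contention weights (illustrative)
              [0.1, 1.5]])     # disk contention assumed more punishing than CPU
job = np.array([0.6, 0.2])             # job of interest: CPU-heavy
neighbours = [np.array([0.3, 0.7])]    # co-scheduled job: disk-heavy
print(round(dilation_factor(job, neighbours, M), 3))
```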

  5. Quantitative Methods for Comparing Different Polyline Stream Network Models

    SciTech Connect

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring relative horizontal accuracy of complex linear spatial features are described and sample source code (pseudo code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving extracting of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that the direct delineation from LiDAR point clouds yielded an excellent and much better match, as indicated by the LRMSE.
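
    The sinuosity part of the comparison can be sketched directly from its definition (polyline length divided by the straight-line distance between endpoints), with relative sinuosity comparing a derived network against the reference; this is a plain reading of the metric, not the authors' exact code.

```python
# Sketch of sinuosity and relative sinuosity for single polylines
# (a plain reading of the metric, not the authors' implementation).
import math

def sinuosity(points):
    """points: list of (x, y) vertices of a single polyline."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return path / chord if chord else float("inf")

def relative_sinuosity(derived, reference):
    return sinuosity(derived) / sinuosity(reference)

reference_line = [(0, 0), (1, 0.6), (2, -0.4), (3, 0.5), (4, 0)]   # digitized stream
derived_line = [(0, 0), (2, 0.2), (4, 0)]                          # coarser delineation
print(round(relative_sinuosity(derived_line, reference_line), 3))  # < 1: detail was lost
```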

  6. Detection of cardiomyopathy in an animal model using quantitative autoradiography

    SciTech Connect

    Kubota, K.; Som, P.; Oster, Z.H.; Brill, A.B.; Goodman, M.M.; Knapp, F.F. Jr.; Atkins, H.L.; Sole, M.J.

    1988-10-01

    A fatty acid analog, 15-(p-iodophenyl)-3,3-dimethylpentadecanoic acid (DMIPP), was studied in cardiomyopathic (CM) and normal age-matched Syrian hamsters. Dual-tracer quantitative whole-body autoradiography (QARG) with DMIPP and 2-[14C(U)]-2-deoxy-2-fluoro-D-glucose (FDG), or with FDG and 201Tl, enabled comparison of the uptake of a fatty acid and a glucose analog with the blood flow. These comparisons were carried out at the onset and mid-stage of the disease, before congestive failure developed. Groups of CM and normal animals were treated with verapamil from the age of 26 days, before the onset of the disease, for 41 days. In CM hearts, areas of decreased DMIPP uptake were seen. These areas were much larger than the decrease in uptake of FDG or 201Tl. In early CM, only minimal changes in FDG or 201Tl uptake were observed as compared to controls. Treatment of CM-prone animals with verapamil prevented any changes in DMIPP, FDG, or 201Tl uptake. DMIPP seems to be a more sensitive indicator of early cardiomyopathic changes as compared to 201Tl or FDG. A trial of DMIPP and SPECT in the diagnosis of human disease, as well as for monitoring the effects of drugs which may prevent it, seems to be warranted.

  7. Comparison of approaches for incorporating new information into existing risk prediction models.

    PubMed

    Grill, Sonja; Ankerst, Donna P; Gail, Mitchell H; Chatterjee, Nilanjan; Pfeiffer, Ruth M

    2017-03-30

    We compare the calibration and variability of risk prediction models that were estimated using various approaches for combining information on new predictors, termed 'markers', with parameter information available for other variables from an earlier model, which was estimated from a large data source. We assess the performance of risk prediction models updated based on likelihood ratio (LR) approaches that incorporate dependence between new and old risk factors as well as approaches that assume independence ('naive Bayes' methods). We study the impact of estimating the LR by (i) fitting a single model to cases and non-cases when the distribution of the new markers is in the exponential family or (ii) fitting separate models to cases and non-cases. We also evaluate a new constrained maximum likelihood method. We study updating the risk prediction model when the new data arise from a cohort and extend available methods to accommodate updating when the new data source is a case-control study. To create realistic correlations between predictors, we also based simulations on real data on response to antiviral therapy for hepatitis C. From these studies, we recommend the LR method fit using a single model or constrained maximum likelihood. Copyright © 2016 John Wiley & Sons, Ltd.
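
    A hedged sketch of the naive Bayes / likelihood-ratio updating idea for a single new marker, assuming the marker is Gaussian with different means in cases and non-cases (illustrative parameters, not estimated ones):

```python
# Sketch of likelihood-ratio (naive Bayes) updating of an existing risk model
# with one new marker, assumed Gaussian with different means in cases and
# non-cases. Parameter values are illustrative.
import math

def update_risk(old_risk, marker, mu_case=1.0, mu_ctrl=0.0, sd=1.0):
    """old_risk: risk from the existing model; marker: new marker value."""
    def normal_pdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    lr = normal_pdf(marker, mu_case, sd) / normal_pdf(marker, mu_ctrl, sd)
    prior_odds = old_risk / (1.0 - old_risk)
    post_odds = prior_odds * lr          # assumes marker independent of the old predictors
    return post_odds / (1.0 + post_odds)

print(round(update_risk(old_risk=0.10, marker=1.5), 3))  # an elevated marker raises the risk
```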

  8. Integrating knowledge representation and quantitative modelling in physiology.

    PubMed

    de Bono, Bernard; Hunter, Peter

    2012-08-01

    A wealth of potentially shareable resources, such as data and models, is being generated through the study of physiology by computational means. Although in principle the resources generated are reusable, in practice, few can currently be shared. A key reason for this disparity stems from the lack of consistent cataloguing and annotation of these resources in a standardised manner. Here, we outline our vision for applying community-based modelling standards in support of an automated integration of models across physiological systems and scales. Two key initiatives, the Physiome Project and the European contribution, the Virtual Physiological Human Project, have emerged to support this multiscale model integration, and we focus on the role played by two key components of these frameworks, model encoding and semantic metadata annotation. We present examples of biomedical modelling scenarios (the endocrine effect of atrial natriuretic peptide, and the implications of alcohol and glucose toxicity) to illustrate the role that encoding standards and knowledge representation approaches, such as ontologies, could play in the management, searching and visualisation of physiology models, and thus in providing a rational basis for healthcare decisions and contributing towards realising the goal of personalized medicine. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we

  10. Digital clocks: simple Boolean models can quantitatively describe circadian systems.

    PubMed

    Akman, Ozgur E; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J; Ghazal, Peter

    2012-09-07

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day-night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we anticipate
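
    A toy synchronous Boolean circuit, invented here purely to illustrate the kind of logic model described (it is not one of the paper's fitted circuits), already shows light-driven switching behaviour:

```python
# Toy synchronous Boolean model of a two-gene negative-feedback loop with a
# light input. The wiring is invented for illustration, not a fitted circuit.
def step(state, light):
    a, b = state               # a: activator-like gene, b: repressor-like gene
    a_next = (not b) or light  # a is switched on by light and repressed by b
    b_next = a                 # b follows a with a one-step delay
    return (a_next, b_next)

state = (False, False)
day_length = 6                 # light steps per 12-step half-cycle (illustrative)
for t in range(24):
    light = (t % 12) < day_length
    state = step(state, light)
    print(t, int(light), int(state[0]), int(state[1]))
# The loop locks on during the light phase and free-runs with period 4 in the
# dark, so its state at lights-on depends on the imposed day length.
```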

  11. Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models

    NASA Astrophysics Data System (ADS)

    de Diesbach, Pablo Brice

    This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and criticize the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from deeper, more business-oriented analyses of the mechanisms of rhetoric in marketing and communication, and of the possible role of E-SQ in human-agent interaction. We finally suggest a refined model for the impacts of these agents on website users, and discuss the limits of the improved model.

  12. Efficient Recycled Algorithms for Quantitative Trait Models on Phylogenies

    PubMed Central

    Hiscott, Gordon; Fox, Colin; Parry, Matthew; Bryant, David

    2016-01-01

    We present an efficient and flexible method for computing likelihoods for phenotypic traits on a phylogeny. The method does not resort to Monte Carlo computation but instead blends Felsenstein’s discrete character pruning algorithm with methods for numerical quadrature. It is not limited to Gaussian models and adapts readily to model uncertainty in the observed trait values. We demonstrate the framework by developing efficient algorithms for likelihood calculation and ancestral state reconstruction under Wright’s threshold model, applying our methods to a data set of trait data for extrafloral nectaries across a phylogeny of 839 Fabales species. PMID:27056412

  13. A quantitative model of plasma in Neptune's magnetosphere

    NASA Astrophysics Data System (ADS)

    Richardson, J. D.

    1993-07-01

    A model encompassing plasma transport and energy processes is applied to Neptune's magnetosphere. Starting with profiles of the neutral densities and the electron temperature, the model calculates the plasma density and ion temperature profiles. Good agreement between model results and observations is obtained for a neutral source of 5 × 10^25 s^-1 if the diffusion coefficient is 10^-8 L^3 R_N^2 s^-1, plasma is lost at a rate 1/3 that of the strong diffusion rate, and plasma subcorotates in the region outside Triton.

  14. A Quantitative Model of Honey Bee Colony Population Dynamics

    PubMed Central

    Khoury, David S.; Myerscough, Mary R.; Barron, Andrew B.

    2011-01-01

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem. PMID:21533156

  15. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-04-18

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
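
    A schematic hive-bee/forager compartment model in the spirit of this abstract (illustrative equations and parameter values, not the authors' exact model) reproduces the qualitative threshold behaviour:

```python
# Schematic two-compartment colony model (illustrative, not the published
# equations): hive bees are recruited to foraging, recruitment slows when many
# foragers are present (social inhibition), and foragers die at rate m per day.
def simulate(m, days=200, dt=0.1):
    H, F = 12000.0, 4000.0                              # initial hive bees and foragers
    L, w, alpha, sigma = 2000.0, 27000.0, 0.25, 0.75    # illustrative constants
    for _ in range(int(days / dt)):
        N = H + F
        eclosion = L * N / (w + N)                      # brood rearing saturates with colony size
        recruit = max(0.0, alpha - sigma * F / N) * H   # recruitment with social inhibition
        H += dt * (eclosion - recruit)
        F += dt * (recruit - m * F)
    return H + F

for m in (0.2, 0.4, 0.6):                               # forager death rate per day
    print(f"m={m}: colony size after 200 days = {simulate(m):.0f}")
# Below a critical death rate the colony settles to a stable size; above it,
# the population declines toward collapse.
```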

  16. Evaluating Alternative Methodologies for Capturing As-Built Building Information Models (BIM) For Existing Facilities

    DTIC Science & Technology

    2010-08-01

    Fragmentary excerpts from the report: the software (e.g., Leica Cyclone) enables users to work with large point clouds directly, using AutoCAD tools and commands to create 2D drawings and 3D models; appendices cover COBIE, 2D and 3D technology services/vendors, a survey equipment listing, and 2D floor plan samples; […] overlain onto the point clouds and model, all of which are represented and navigable in 3D; 2D drawings of facility floor plans […]

  17. Quantitative comparisons of numerical models of brittle deformation

    NASA Astrophysics Data System (ADS)

    Buiter, S.

    2009-04-01

    Numerical modelling of brittle deformation in the uppermost crust can be challenging owing to the requirement of an accurate pressure calculation, the ability to achieve post-yield deformation and localisation, and the choice of rheology (plasticity law). One way to approach these issues is to conduct model comparisons that can evaluate the effects of different implementations of brittle behaviour in crustal deformation models. We present a comparison of three brittle shortening experiments for fourteen different numerical codes, which use finite element, finite difference, boundary element and distinct element techniques. Our aim is to constrain and quantify the variability among models in order to improve our understanding of causes leading to differences between model results. Our first experiment of translation of a stable sand-like wedge serves as a reference that allows for testing against analytical solutions (e.g., taper angle, root-mean-square velocity and gravitational rate of work). The next two experiments investigate an unstable wedge in a sandbox-like setup which deforms by inward translation of a mobile wall. All models accommodate shortening by in-sequence formation of forward shear zones. We analyse the location, dip angle and spacing of thrusts in detail as previous comparisons have shown that these can be highly variable in numerical and analogue models of crustal shortening and extension. We find that an accurate implementation of boundary friction is important for our models. Our results are encouraging in the overall agreement in their dynamic evolution, but show at the same time the effort that is needed to understand shear zone evolution. GeoMod2008 Team: Markus Albertz, Michele Cooke, Susan Ellis, Taras Gerya, Luke Hodkinson, Kristin Hughes, Katrin Huhn, Boris Kaus, Walter Landry, Bertrand Maillot, Christophe Pascal, Anton Popov, Guido Schreurs, Christopher Beaumont, Tony Crook, Mario Del Castello and Yves Leroy

  18. Quantitative comparisons of numerical models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne

    2010-05-01

    Numerical and laboratory models are often used to investigate the evolution of deformation processes at various scales in crust and lithosphere. In both approaches, the freedom in choice of simulation method, materials and their properties, and deformation laws could affect model outcomes. To assess the role of modelling method and to quantify the variability among models, we have performed a comparison of laboratory and numerical experiments. Here, we present results of 11 numerical codes, which use finite element, finite difference and distinct element techniques. We present three experiments that describe shortening of a sand-like, brittle wedge. The material properties of the numerical ‘sand', the model set-up and the boundary conditions are strictly prescribed and follow the analogue setup as closely as possible. Our first experiment translates a non-accreting wedge with a stable surface slope of 20 degrees. In agreement with critical wedge theory, all models maintain the same surface slope and do not deform. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge in a sandbox-like setup, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. We show that we successfully simulate sandbox-style brittle behaviour using different numerical modelling techniques and that we obtain the same styles of deformation behaviour in numerical and laboratory experiments at similar levels of variability. The GeoMod2008 Numerical Team: Markus Albertz, Michelle Cooke, Tony Crook, David Egholm, Susan Ellis, Taras Gerya, Luke Hodkinson, Boris Kaus, Walter Landry, Bertrand Maillot, Yury Mishin

  19. Can existing climate models be used to study anthropogenic changes in tropical cyclone climate

    SciTech Connect

    Broccoli, A.J.; Manabe, S.

    1990-10-01

    The utility of current generation climate models for studying the influence of greenhouse warming on the tropical storm climatology is examined. A method developed to identify tropical cyclones is applied to a series of model integrations. The global distribution of tropical storms is simulated by these models in a generally realistic manner. While the model resolution is insufficient to reproduce the fine structure of tropical cyclones, the simulated storms become more realistic as resolution is increased. To obtain a preliminary estimate of the response of the tropical cyclone climatology, CO2 was doubled using models with varying cloud treatments and different horizontal resolutions. In the experiment with prescribed cloudiness, the number of storm-days, a combined measure of the number and duration of tropical storms, undergoes a statistically significant reduction, whereas an increase in the number of storm-days is indicated in the experiment with cloud feedback. In both cases the response is independent of horizontal resolution. While the inconclusive nature of these experimental results highlights the uncertainties that remain in examining the details of greenhouse-gas induced climate change, the ability of the models to qualitatively simulate the tropical storm climatology suggests that they are appropriate tools for this problem.

  20. Existence of multiple-stable equilibria for a multi-drug-resistant model of Mycobacterium tuberculosis.

    PubMed

    Gumel, Abba B; Song, Baojun

    2008-07-01

    The resurgence of multi-drug-resistant tuberculosis in some parts of Europe and North America calls for a mathematical study to assess the impact of the emergence and spread of such a strain on the global effort to effectively control the burden of tuberculosis. This paper presents a deterministic compartmental model for the transmission dynamics of two strains of tuberculosis, a drug-sensitive (wild) one and a multi-drug-resistant strain. The model allows for the assessment of the treatment of people infected with the wild strain. The qualitative analysis of the model reveals the following. The model has a disease-free equilibrium, which is locally asymptotically stable if a certain threshold, known as the effective reproduction number, is less than unity. Further, the model undergoes a backward bifurcation, where the disease-free equilibrium coexists with a stable endemic equilibrium. One of the main novelties of this study is the numerical illustration of tri-stable equilibria, where the disease-free equilibrium coexists with two stable endemic equilibria when the aforementioned threshold is less than unity, and a bi-stable setup, involving two stable endemic equilibria, when the effective reproduction number is greater than one. This, to our knowledge, is the first time such dynamical features have been observed in TB dynamics. Finally, it is shown that the backward bifurcation phenomenon in this model arises due to the exogenous re-infection property of tuberculosis.

  1. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    PubMed

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
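
    One of the mentioned techniques, optimal scaling, can be sketched generically: when data are only relative (arbitrary units), fit the least-squares scale factor between model prediction and data before computing the residual. This is a generic illustration, not the specific methods reviewed in the paper.

```python
# Sketch of "optimal scaling" for relative (arbitrary-unit) data: fit the
# least-squares scale factor between model prediction and measurement, then
# score the residual. Generic illustration with made-up numbers.
import numpy as np

def scaled_sse(model_pred, data):
    """Best-fit scale b = argmin ||data - b*model||^2, then return (b, SSE)."""
    m = np.asarray(model_pred, dtype=float)
    d = np.asarray(data, dtype=float)
    b = float(m @ d) / float(m @ m)
    residual = d - b * m
    return b, float(residual @ residual)

model_profile = np.array([0.1, 0.4, 1.0, 0.5, 0.2])         # model gradient (model units)
stain_intensity = np.array([12.0, 44.0, 98.0, 55.0, 18.0])  # imaging data (arbitrary units)
print(scaled_sse(model_profile, stain_intensity))
```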

  2. 3D Numerical Modeling of the Propagation of Hydraulic Fracture at Its Intersection with Natural (Pre-existing) Fracture

    NASA Astrophysics Data System (ADS)

    Dehghan, Ali Naghi; Goshtasbi, Kamran; Ahangari, Kaveh; Jin, Yan; Bahmani, Aram

    2017-02-01

    A variety of 3D numerical models were developed based on hydraulic fracture experiments to simulate the propagation of hydraulic fracture at its intersection with natural (pre-existing) fracture. Since the interaction between hydraulic and pre-existing fractures is a key condition that causes complex fracture patterns, the extended finite element method was employed in ABAQUS software to simulate the problem. The propagation of hydraulic fracture in a fractured medium was modeled in two horizontal differential stresses (Δσ) of 5 × 10^6 and 10 × 10^6 Pa considering different strike and dip angles of pre-existing fracture. The rate of energy release was calculated in the directions of hydraulic and pre-existing fractures (G_frac/G_rock) at their intersection point to determine the fracture behavior. Opening and crossing were two dominant fracture behaviors during the hydraulic and pre-existing fracture interaction at low and high differential stress conditions, respectively. The results of numerical studies were compared with those of experimental models, showing a good agreement between the two to validate the accuracy of the models. Besides the horizontal differential stress, strike and dip angles of the natural (pre-existing) fracture, the key finding of this research was the significant effect of the energy release rate on the propagation behavior of the hydraulic fracture. This effect was more prominent under the influence of strike and dip angles, as well as differential stress. The obtained results can be used to predict and interpret the generation of complex hydraulic fracture patterns in field conditions.

  3. Derivation of a quantitative minimal model from a detailed elementary-step mechanism supported by mathematical coupling analysis

    NASA Astrophysics Data System (ADS)

    Shaik, O. S.; Kammerer, J.; Gorecki, J.; Lebiedz, D.

    2005-12-01

    Accurate experimental data increasingly allow the development of detailed elementary-step mechanisms for complex chemical and biochemical reaction systems. Model reduction techniques are widely applied to obtain representations in lower-dimensional phase space which are more suitable for mathematical analysis, efficient numerical simulation, and model-based control tasks. Here, we exploit a recently implemented numerical algorithm for error-controlled computation of the minimum dimension required for a still accurate reduced mechanism based on automatic time scale decomposition and relaxation of fast modes. We determine species contributions to the active (slow) dynamical modes of the reaction system and exploit this information in combination with quasi-steady-state and partial-equilibrium approximations for explicit model reduction of a novel detailed chemical mechanism for the Ru-catalyzed light-sensitive Belousov-Zhabotinsky reaction. The existence of a minimum dimension of seven is demonstrated to be mandatory for the reduced model to show good quantitative consistency with the full model in numerical simulations. We derive such a maximally reduced seven-variable model from the detailed elementary-step mechanism and demonstrate that it reproduces quantitatively accurately the dynamical features of the full model within a given accuracy tolerance.
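
    The quasi-steady-state idea underlying such reductions can be illustrated on the classical Michaelis-Menten mechanism (a textbook example, not the Belousov-Zhabotinsky mechanism treated in the paper): setting the enzyme-substrate complex to quasi-steady state eliminates the fast variable, and the reduced one-variable model approximately reproduces the full two-variable dynamics.

```python
# Illustration of the quasi-steady-state approximation (QSSA) used in model
# reduction, shown on the Michaelis-Menten mechanism rather than the BZ
# mechanism of the paper. Rate constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2, E_tot = 10.0, 1.0, 5.0, 0.2

def full_model(t, y):                         # y = [S, ES]
    S, ES = y
    E = E_tot - ES
    return [-k1 * E * S + km1 * ES, k1 * E * S - (km1 + k2) * ES]

def reduced_model(t, y):                      # QSSA: ES ~ E_tot*S/(Km + S)
    S = y[0]
    Km = (km1 + k2) / k1
    return [-k2 * E_tot * S / (Km + S)]

t_eval = np.linspace(0.0, 4.0, 5)
full = solve_ivp(full_model, (0.0, 4.0), [2.0, 0.0], t_eval=t_eval)
red = solve_ivp(reduced_model, (0.0, 4.0), [2.0], t_eval=t_eval)
for t, s_full, s_red in zip(t_eval, full.y[0], red.y[0]):
    print(f"t={t:.0f}  S_full={s_full:.3f}  S_reduced={s_red:.3f}")
```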

  4. Modeling radiative transfer in real gases: An assessment of existing methods in 2D enclosures

    SciTech Connect

    Goutiere, V.; Charette, A.; Liu, F.

    1999-07-01

    In order to model efficiently the radiative transfer in a real participating gas, various methods have been developed during the last few decades. Each method has its own formulation and leads to different accuracies and computation times. Most of the studies reported in the literature concern specific real gas models, and very few are devoted to an extended comparison of these models. The present study is a 2D assessment of some of the most up-to-date real gas methods: the cumulative-k method (CK), the statistical narrow-band model (SNB), two hybrid SNB-CK methods, the spectral line based weighted sum of gray gases method (SLW) and the exponential wide band model (EWB). Four cases are considered: one homogeneous and isothermal case with a single participating gas (H2O), and one homogeneous and non-isothermal case with a mixture of CO2 and H2O. Although the SNB and SNB-CK methods are the most accurate methods, the SLW method actually seems to offer the best trade-off between accuracy and computation time.

  5. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology

    PubMed Central

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian

    2012-01-01

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼ 10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10^6, of normal HSCs. Radiobiologic estimates favor values > 10^6 for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  6. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology.

    PubMed

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian; Sachs, Rainer K

    2012-05-10

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10(6), of normal HSCs. Radiobiologic estimates favor values>10(6) for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general.

  7. Building Coalitions To Provide HIV Legal Advocacy Services: Utilizing Existing Disability Models. AIDS Technical Report, No. 5.

    ERIC Educational Resources Information Center

    Harvey, David C.; Ardinger, Robert S.

    This technical report is part of a series on AIDS/HIV (Acquired Immune Deficiency Syndrome/Human Immunodeficiency Virus) and is intended to help link various legal advocacy organizations providing services to persons with mental illness or developmental disabilities. This report discusses strategies to utilize existing disability models for…

  8. Utilization of data estimation via existing models, within a tiered data quality system, for populating species sensitivity distributions

    EPA Science Inventory

    The acquisition toxicity test data of sufficient quality from open literature to fulfill taxonomic diversity requirements can be a limiting factor in the creation of new 304(a) Aquatic Life Criteria. The use of existing models (WebICE and ACE) that estimate acute and chronic eff...

  11. Had the planet Mars not existed: Kepler's equant model and its physical consequences

    NASA Astrophysics Data System (ADS)

    Bracco, C.; Provost, J.-P.

    2009-09-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal acceleration with an r⁻² dependence on the distance to the Sun. If this dependence is assumed to be universal, Kepler's third law follows immediately. This elementary exercise in kinematics for undergraduates emphasizes the proximity of the equant model, inherited from ancient Greece, to our present knowledge. It adds to its historical interest a didactical relevance concerning, in particular, the discussion of the Aristotelian or Newtonian conception of motion.

  12. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    PubMed

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
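
    The production-efficiency calculation above rests on the Frank-Tamm relation. As a hedged illustration (not the authors' code), the sketch below integrates the Frank-Tamm photon yield over an assumed 400-700 nm band for a monoenergetic electron in water; the refractive index, wavelength limits, and function name are illustrative assumptions.

```python
import numpy as np

ALPHA = 1 / 137.036  # fine-structure constant

def cerenkov_photons_per_cm(kinetic_energy_mev, n=1.33, lam1_nm=400.0, lam2_nm=700.0, z=1):
    """Photons emitted per cm of path by a charged particle (Frank-Tamm),
    integrated over [lam1, lam2], assuming a constant refractive index n."""
    m_e = 0.511  # electron rest energy, MeV
    gamma = 1.0 + kinetic_energy_mev / m_e
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    if beta * n <= 1.0:          # below the Cerenkov threshold: no light produced
        return 0.0
    lam1, lam2 = lam1_nm * 1e-7, lam2_nm * 1e-7  # nm -> cm
    return 2.0 * np.pi * ALPHA * z**2 * (1.0 / lam1 - 1.0 / lam2) * (1.0 - 1.0 / (beta**2 * n**2))

# Example: threshold behaviour in water (n ~ 1.33); electrons below ~0.26 MeV emit nothing
for T in (0.2, 0.5, 1.0, 2.0):
    print(T, "MeV ->", round(cerenkov_photons_per_cm(T), 1), "photons/cm")
```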

  13. Magnetospheric mapping with a quantitative geomagnetic field model

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Mead, G. D.

    1975-01-01

    Mapping the magnetosphere on a dipole geomagnetic field model by projecting field and particle observations onto the model is described. High-latitude field lines are traced between the earth's surface and their intersection with either the equatorial plane or a cross section of the geomagnetic tail, and data from low-altitude orbiting satellites are projected along field lines to the outer magnetosphere. This procedure is analyzed, and the resultant mappings are illustrated. Extension of field lines into the geomagnetic tail and low-altitude determination of the polar cap and cusp are presented. It is noted that while there is good agreement among the various data, more particle measurements are necessary to clear up statistical uncertainties and to facilitate comparison of statistical models.

  14. The Terrestrial Magnetopause and Bow Shock: A Comparison of New Data to Existing Models

    NASA Astrophysics Data System (ADS)

    Tanberg, S. J.; Reisenfeld, D. B.; Janzen, P. H.; Petrinec, S. M.

    2011-12-01

    The position and shape of the Earth's magnetopause and bow shock have been studied since the 1950s, mostly in a region within 20 Earth-radii (RE) of the planet, or in the distant tail region (about 200 RE downstream of the Earth). We use in situ data from the Interstellar Boundary Explorer (IBEX) collected at distances between 15 and 55 RE, and at nearly all local times. This data set is unique in that the structure of the magnetopause and bow shock has not been extensively studied between 35 and 55 RE. Therefore, we have used this new data set to consider how well the leading published models match the shape of these boundaries in this unexplored region. The two-and-a-half-year collection period, from the beginning of 2009 through mid-2011, also marked a period when the solar wind was remarkably quiet, as the Sun is now just exiting a very deep and prolonged solar minimum. Thus our data set is optimal for comparison to steady-state model predictions. Owing to the unique way in which IBEX collects data, we have implemented an original method for sifting out the magnetosheath signal. In addition, because IBEX is not equipped with a magnetometer or plasma analyzers, we have complemented our magnetosheath data with data from the OMNI 2 collection of solar wind measurements in order to adjust boundary locations to account for changes in the solar wind dynamic pressure and interplanetary magnetic field. We find that the IBEX data set correlates very well with the data from the OMNI 2 collection. As expected, the older models for the magnetopause and bow shock (an ellipse and parabola, respectively) do not fit the data in the 15 to 55 RE region as well as the models presented by Shue et al., 1997, and Chao et al., 2002, though the differences between the bow shock models are minuscule in this region compared to the differences between the magnetopause models.
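
    For readers unfamiliar with the boundary models being compared, the sketch below evaluates the functional form used by Shue et al. (1997) for the magnetopause, r = r0 (2/(1 + cos θ))^α. In the published model r0 and α are computed from the solar wind dynamic pressure and IMF Bz; here they are left as plain inputs, and the default values are illustrative assumptions only.

```python
import numpy as np

def shue_magnetopause(theta_rad, r0=10.0, alpha=0.58):
    """Shue et al. (1997) functional form for the magnetopause boundary:
    r = r0 * (2 / (1 + cos(theta)))**alpha, with r in Earth radii (RE).
    theta is the angle from the Sun-Earth line; r0 (subsolar standoff distance)
    and alpha (tail flaring) are normally derived from solar wind dynamic
    pressure and IMF Bz -- here they are passed in directly."""
    return r0 * (2.0 / (1.0 + np.cos(theta_rad)))**alpha

# Example: boundary location from the subsolar point out toward the flanks
theta = np.linspace(0.0, 0.75 * np.pi, 7)
for t, r in zip(theta, shue_magnetopause(theta)):
    print(f"theta = {np.degrees(t):6.1f} deg  ->  r = {r:5.2f} RE")
```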

  15. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model

    PubMed Central

    Winslow, Brent D.; Nguyen, Nam; Venta, Kimberly E.

    2017-01-01

    Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices. PMID:28487671

  16. Improved Mental Acuity Forecasting with an Individualized Quantitative Sleep Model.

    PubMed

    Winslow, Brent D; Nguyen, Nam; Venta, Kimberly E

    2017-01-01

    Sleep impairment significantly alters human brain structure and cognitive function, but available evidence suggests that adults in developed nations are sleeping less. A growing body of research has sought to use sleep to forecast cognitive performance by modeling the relationship between the two, but has generally focused on vigilance rather than other cognitive constructs affected by sleep, such as reaction time, executive function, and working memory. Previous modeling efforts have also utilized subjective, self-reported sleep durations and were restricted to laboratory environments. In the current effort, we addressed these limitations by employing wearable systems and mobile applications to gather objective sleep information, assess multi-construct cognitive performance, and model/predict changes to mental acuity. Thirty participants were recruited for participation in the study, which lasted 1 week. Using the Fitbit Charge HR and a mobile version of the automated neuropsychological assessment metric called CogGauge, we gathered a series of features and utilized the unified model of performance to predict mental acuity based on sleep records. Our results suggest that individuals poorly rate their sleep duration, supporting the need for objective sleep metrics to model circadian changes to mental acuity. Participant compliance in using the wearable throughout the week and responding to the CogGauge assessments was 80%. Specific biases were identified in temporal metrics across mobile devices and operating systems and were excluded from the mental acuity metric development. Individualized prediction of mental acuity consistently outperformed group modeling. This effort indicates the feasibility of creating an individualized, mobile assessment and prediction of mental acuity, compatible with the majority of current mobile devices.

  17. Developing Best Practices for Capturing As-Built Building Information Models (BIM) for Existing Facilities

    DTIC Science & Technology

    2010-08-01

    allowing for the import of 2D drawings to use as an underlay when creating 3D geometry. SketchUp also supported a direct export into Google Earth. The...essentially a 3D modeler that creates 2D polygons. These 2D polygons that are generated can result in many inaccuracies if using the model to calculate...Earth, three-dimensional Portable Document Format (3D PDF), and BIM integration technologies, with a focus on task-centered interface and workflows

  18. Benthic-Pelagic Coupling in Biogeochemical and Climate Models: Existing Approaches, Recent developments and Roadblocks

    NASA Astrophysics Data System (ADS)

    Arndt, Sandra

    2016-04-01

    Marine sediments are key components in the Earth System. They host the largest carbon reservoir on Earth, provide the only long term sink for atmospheric CO2, recycle nutrients and represent the most important climate archive. Biogeochemical processes in marine sediments are thus essential for our understanding of the global biogeochemical cycles and climate. They are, first and foremost, donor controlled and thus driven by the rain of particulate material from the euphotic zone and influenced by the overlying bottom water. Geochemical species may undergo several recycling loops (e.g. authigenic mineral precipitation/dissolution) before they are either buried or diffuse back to the water column. The tightly coupled and complex pelagic and benthic process interplay thus delays recycling flux, significantly modifies the depositional signal and controls the long-term removal of carbon from the ocean-atmosphere system. Despite the importance of this mutual interaction, coupled regional/global biogeochemical models and (paleo)climate models, which are designed to assess and quantify the transformations and fluxes of carbon and nutrients and evaluate their response to past and future perturbations of the climate system, either completely neglect marine sediments or incorporate a highly simplified representation of benthic processes. On the other end of the spectrum, coupled, multi-component state-of-the-art early diagenetic models have been successfully developed and applied over the past decades to reproduce observations and quantify sediment-water exchange fluxes, but cannot easily be coupled to pelagic models. The primary constraint here is the high computation cost of simulating all of the essential redox and equilibrium reactions within marine sediments that control carbon burial and benthic recycling fluxes: a barrier that is easily exacerbated if a variety of benthic environments are to be spatially resolved. This presentation provides an integrative overview of

  19. Existing Whole-House Solutions Case Study: Community-Scale Energy Modeling - Southeastern United States

    SciTech Connect

    2014-12-01

    Community-scale energy modeling and testing are useful for determining energy conservation measures that will effectively reduce energy use. To that end, IBACOS analyzed pre-retrofit daily utility data to sort homes by energy consumption, allowing for better targeting of homes for physical audits. Following ASHRAE Guideline 14 normalization procedures, electricity consumption of 1,166 all-electric, production-built homes was modeled. The homes were in two communities: one built in the 1970s and the other in the mid-2000s.

  20. The Existence and Stability Analysis of the Equilibria in Dengue Disease Infection Model

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Supriatna, A. K.; Soewono, E.

    2015-06-01

    In this paper we formulate an SIR (Susceptible - Infective - Recovered) model of Dengue fever transmission with constant recruitment. We found a threshold parameter K0, known as the Basic Reproduction Number (BRN). This model has two equilibria, a disease-free equilibrium and an endemic equilibrium. By constructing a suitable Lyapunov function, we show that the disease-free equilibrium is globally asymptotically stable whenever the BRN is less than one and that, when it is greater than one, the endemic equilibrium is globally asymptotically stable. Numerical results show the dynamics of each compartment together with the effect of multiple bio-agent interventions as a control on dengue transmission.
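
    As a rough, hedged companion to the abstract, the sketch below integrates a single-population SIR model with constant recruitment and reports its basic reproduction number. It is a simplified stand-in rather than the paper's host-vector dengue formulation, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_with_recruitment(t, y, Lambda, mu, beta, gamma):
    """SIR model with constant recruitment Lambda and natural mortality mu
    (a simplified, single-population stand-in for a host-vector dengue model)."""
    S, I, R = y
    N = S + I + R
    dS = Lambda - beta * S * I / N - mu * S
    dI = beta * S * I / N - (gamma + mu) * I
    dR = gamma * I - mu * R
    return [dS, dI, dR]

Lambda, mu, beta, gamma = 10.0, 0.01, 0.35, 0.1
R0 = beta / (gamma + mu)           # basic reproduction number of this simplified model
print("R0 =", round(R0, 2))        # > 1: the endemic equilibrium is expected to be stable

sol = solve_ivp(sir_with_recruitment, (0, 400), [990.0, 10.0, 0.0],
                args=(Lambda, mu, beta, gamma), dense_output=True)
S_end, I_end, R_end = sol.y[:, -1]
print("state at t = 400:", round(S_end, 1), round(I_end, 1), round(R_end, 1))
```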

  1. Inference of Quantitative Models of Bacterial Promoters from Time-Series Reporter Gene Data

    PubMed Central

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for

  2. Inference of quantitative models of bacterial promoters from time-series reporter gene data.

    PubMed

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for

  3. Analysis of protein complexes through model-based biclustering of label-free quantitative AP-MS data.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I

    2010-06-22

    Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein-protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving bait proteins as compared with proteins only identified as preys. The method does not require specification of the number of bait clusters, which is an advantage over existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data.

  4. Quantitative experimental modelling of fragmentation during explosive volcanism

    NASA Astrophysics Data System (ADS)

    Thordén Haug, Ø.; Galland, O.; Gisler, G.

    2012-04-01

    Phreatomagmatic eruptions result from the violent interaction between magma and an external source of water, such as ground water or a lake. This interaction causes fragmentation of the magma and/or the host rock, resulting in coarse-grained (lapilli) to very fine-grained (ash) material. The products of phreatomagmatic explosions are classically described by their fragment size distribution, which commonly follows power laws of exponent D. Such a descriptive approach, however, considers the final products only and does not provide information on the dynamics of fragmentation. The aim of this contribution is thus to address the following fundamental questions. What are the physics that govern fragmentation processes? How does fragmentation occur through time? What are the mechanisms that produce power law fragment size distributions? And what are the scaling laws that control the exponent D? To address these questions, we performed a quantitative experimental study. The setup consists of a Hele-Shaw cell filled with a layer of cohesive silica flour, at the base of which a pulse of pressurized air is injected, leading to fragmentation of the layer of flour. The fragmentation process is monitored through time using a high-speed camera. By varying systematically the air pressure (P) and the thickness of the flour layer (h) we observed two morphologies of fragmentation: "lift off" where the silica flour above the injection inlet is ejected upwards, and "channeling" where the air pierces through the layer along a sub-vertical conduit. By building a phase diagram, we show that the morphology is controlled by P/(dgh), where d is the density of the flour and g is the gravitational acceleration. To quantify the fragmentation process, we developed a Matlab image analysis program, which calculates the number and sizes of the fragments, and so the fragment size distribution, during the experiments. The fragment size distributions are in general described by power law distributions of
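
    Since the fragment size distributions are summarized by a power-law exponent D, the following hedged sketch shows one standard way to estimate D from a list of fragment sizes (a maximum-likelihood fit above a cutoff x_min). It is not the authors' Matlab image-analysis program, and the synthetic data are purely illustrative.

```python
import numpy as np

def powerlaw_exponent_mle(sizes, x_min):
    """Maximum-likelihood estimate of the exponent D of a continuous power-law
    fragment size distribution p(x) ~ x**(-D) for x >= x_min."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))

# Example with synthetic fragment sizes drawn from a known power law (D = 2.5)
rng = np.random.default_rng(0)
true_D, x_min = 2.5, 1.0
u = rng.random(5000)
sizes = x_min * (1.0 - u) ** (-1.0 / (true_D - 1.0))   # inverse-CDF sampling
print("estimated D:", round(powerlaw_exponent_mle(sizes, x_min), 2))
```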

  5. Quantitative analysis of free and bonded forms of volatile sulfur compounds in wine. Basic methodologies and evidences showing the existence of reversible cation-complexed forms.

    PubMed

    Franco-Luesma, Ernesto; Ferreira, Vicente

    2014-09-12

    This paper first examines some basic aspects critical to the analysis of Volatile Sulfur Compounds (VSCs), such as the analytical characteristics of the GC-pFPD system and the stability of the different standard solutions required for a proper calibration. Next, a direct static headspace analytical method for the determination of exclusively free forms of VSCs has been developed. Method repeatability is better than 4%, detection limits for main analytes are below 0.5μgL(-1), and the method dynamic linear range (r(2)>0.99) is expanded by controlling the split ratio in the chromatographic inlet to cover the natural range of occurrence of these compounds in wines. The method gives reliable estimates of headspace concentrations but, as expected, suffers from strong matrix effects with recoveries ranging from 0 to 100% or from 60 to 100% in the cases of H2S and the other mercaptans, respectively. This demonstrates the existence of strong interactions of these compounds with different matrix components. The complexing ability of Cu(2+) and to a lower extent Fe(2+) and Zn(2+) has been experimentally checked. A previously developed method in which the wine is strongly diluted with brine and the volatiles are preconcentrated by HS-SPME was found to give a reliable estimation of the total amount (free+complexed) of mercaptans, demonstrating that metal-mercaptan complexes are reversible. The comparative analysis of different wines by the two procedures reveals that in normal wines H2S and methanethiol can be complexed at levels above 99%, with averages around 97% for H2S and 75% for methanethiol, while thioethers such as dimethyl sulfide (DMS) are not complexed. Overall, the proposed strategy may be generalized to understand problems caused by VSCs in different matrices. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. What are the unique attributes of potassium that challenge existing nutrient uptake models?

    USDA-ARS?s Scientific Manuscript database

    Soil potassium (K) availability and acquisition by plant root systems are controlled by complex, interacting processes that make it difficult to assess their individual impacts on crop growth. Mechanistic, mathematical models provide an important tool to enhance understanding of these processes, and...

  7. Fatigue assessment of an existing steel bridge by finite element modelling and field measurements

    NASA Astrophysics Data System (ADS)

    Kwad, J.; Alencar, G.; Correia, J.; Jesus, A.; Calçada, R.; Kripakaran, P.

    2017-05-01

    The evaluation of fatigue life of structural details in metallic bridges is a major challenge for bridge engineers. A reliable and cost-effective approach is essential to ensure appropriate maintenance and management of these structures. Typically, local stresses predicted by a finite element model of the bridge are employed to assess the fatigue life of fatigue-prone details. This paper illustrates an approach for fatigue assessment based on measured data for a connection in an old bascule steel bridge located in Exeter (UK). A finite element model is first developed from the design information. The finite element model of the bridge is calibrated using measured responses from an ambient vibration test. The stress time histories are calculated through dynamic analysis of the updated finite element model. Stress cycles are computed through the rainflow counting algorithm, and the fatigue-prone details are evaluated using the standard S-N curve approach and Miner's rule. Results show that the proposed approach can estimate the fatigue damage of a fatigue-prone detail in a structure using measured strain data.
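
    To make the damage-accumulation step concrete, the hedged sketch below applies Miner's rule to a set of rainflow-counted stress cycles using a single-slope S-N curve. The S-N constants and cycle counts are illustrative assumptions, not values from the bridge study.

```python
def miner_damage(cycles, C=2.0e12, m=3.0):
    """Cumulative fatigue damage by Miner's rule for a list of
    (stress_range_MPa, n_cycles) pairs, using a single-slope S-N curve
    N_allow = C / S**m (C and m here are illustrative placeholders)."""
    damage = 0.0
    for stress_range, n in cycles:
        n_allow = C / stress_range**m   # allowable cycles at this stress range
        damage += n / n_allow
    return damage   # failure is predicted when the damage sum reaches ~1.0

# Example: cycle counts as they might come out of rainflow counting a stress history
counted = [(80.0, 1.2e5), (40.0, 8.0e5), (20.0, 5.0e6)]
D = miner_damage(counted)
print("Miner damage sum:", round(D, 3), "-> estimated fraction of fatigue life consumed")
```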

  8. National Interlending Systems: A Comparative Study of Existing Systems and Possible Models. Revised.

    ERIC Educational Resources Information Center

    Line, Maurice B.; And Others

    Based on research completed in 1977 and comments on a 1979 preliminary version of this report, this work evaluates current interlending practices among participants in the United Nations Educational, Scientific, and Cultural Organization (UNESCO), and proposes various models of interlibrary lending provision. The paper outlines the elements…

  9. A quantitative risk model for early lifecycle decision making

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.

  10. Quantitative Description of Monthly Ionospheric Variability in the IRI Model

    NASA Astrophysics Data System (ADS)

    Bilitza, D.

    2004-12-01

    The International Reference Ionosphere (IRI) model provides an empirical specification of the ionospheric climatology at the level of monthly averages. Operational use of the IRI model often requires an estimate of the monthly variability, so that an operator not only knows the expected monthly average value of an ionospheric parameter but also the expected variation around this monthly average. A special IRI Task Force Activity at the International Center for Theoretical Physics (ICTP) in Trieste, Italy has worked on this modeling goal during the last few years using ionosonde data from many stations worldwide focusing primarily on the electron density in the region below the F peak. Other IRI team members have looked at the variability at different heights and have studied the variability seen for the plasma temperatures. We will report on the status and progress of this activity and will discuss the different parameters used for describing ionospheric variability (mean, median, standard deviation, quartiles, deciles) and the planned model implementation. First results will be reported based on the ICTP meetings and the 2003 IRI Workshop in Grahamstown, South Africa.

  11. Unified quantitative model of AMPA receptor trafficking at synapses.

    PubMed

    Czöndör, Katalin; Mondin, Magali; Garcia, Mikael; Heine, Martin; Frischknecht, Renato; Choquet, Daniel; Sibarita, Jean-Baptiste; Thoumine, Olivier R

    2012-02-28

    Trafficking of AMPA receptors (AMPARs) plays a key role in synaptic transmission. However, a general framework integrating the two major mechanisms regulating AMPAR delivery at postsynapses (i.e., surface diffusion and internal recycling) is lacking. To this aim, we built a model based on numerical trajectories of individual AMPARs, including free diffusion in the extrasynaptic space, confinement in the synapse, and trapping at the postsynaptic density (PSD) through reversible interactions with scaffold proteins. The AMPAR/scaffold kinetic rates were adjusted by comparing computer simulations to single-particle tracking and fluorescence recovery after photobleaching experiments in primary neurons, in different conditions of synapse density and maturation. The model predicts that the steady-state AMPAR number at synapses is bidirectionally controlled by AMPAR/scaffold binding affinity and PSD size. To reveal the impact of recycling processes in basal conditions and upon synaptic potentiation or depression, spatially and temporally defined exocytic and endocytic events were introduced. The model predicts that local recycling of AMPARs close to the PSD, coupled to short-range surface diffusion, provides rapid control of AMPAR number at synapses. In contrast, because of long-range diffusion limitations, extrasynaptic recycling is intrinsically slower and less synapse-specific. Thus, by discriminating the relative contributions of AMPAR diffusion, trapping, and recycling events on spatial and temporal bases, this model provides unique insights on the dynamic regulation of synaptic strength.

  12. A quantitative magnetospheric model derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1975-01-01

    The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.

  13. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    ERIC Educational Resources Information Center

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  14. A quantitative magnetospheric model derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1975-01-01

    The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.

  15. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    ERIC Educational Resources Information Center

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  16. Modelling of the shielding capabilities of the existing solid radioactive waste storages at Ignalina NPP.

    PubMed

    Smaizys, Arturas; Poskas, Povilas; Ragaisis, Valdas

    2005-01-01

    There is only one nuclear power plant in Lithuania--Ignalina NPP (INPP). The INPP operates two similar units with design electrical power of 1500 MW. The units were commissioned in 1983 and 1987, respectively. From the beginning of INPP operation, all generated solid radioactive waste was collected and stored at the Soviet-type solid radwaste facility located at the INPP site. The INPP solid radwaste storage facility consists of four buildings, namely building No. 155, No. 155/1, No. 157 and No. 157/1. The buildings of the INPP solid radwaste storage facility are reinforced concrete structures above ground. The State Nuclear Safety Inspectorate (VATESI) has specified that a particular safety analysis must be performed for existing radioactive waste storage facilities of the INPP. As part of the safety analysis, shielding capabilities of the walls and roofs of these buildings were analysed. This paper presents a radiation shielding analysis of the buildings No. 157 and No. 157/1 that are still in operation. The buildings No. 155 and No. 155/1 are already filled up with the waste and no additional waste loading is expected.

  17. Racial Differences in the Performance of Existing Risk Prediction Models for Incident Type 2 Diabetes: The CARDIA Study

    PubMed Central

    Wellenius, Gregory A.; Carnethon, Mercedes R.; Loucks, Eric B.; Carson, April P.; Luo, Xi; Kiefe, Catarina I.; Gjelsvik, Annie; Gunderson, Erica P.; Eaton, Charles B.; Wu, Wen-Chih

    2016-01-01

    OBJECTIVE In 2010, the American Diabetes Association (ADA) added hemoglobin A1c (A1C) to the guidelines for diagnosing type 2 diabetes. However, existing models for predicting diabetes risk were developed prior to the widespread adoption of A1C. Thus, it remains unknown how well existing diabetes risk prediction models predict incident diabetes defined according to the ADA 2010 guidelines. Accordingly, we examined the performance of an existing diabetes prediction model applied to a cohort of African American (AA) and white adults from the Coronary Artery Risk Development Study in Young Adults (CARDIA). RESEARCH DESIGN AND METHODS We evaluated the performance of the Atherosclerosis Risk in Communities (ARIC) diabetes risk prediction model among 2,456 participants in CARDIA free of diabetes at the 2005–2006 exam and followed for 5 years. We evaluated model discrimination, calibration, and integrated discrimination improvement with incident diabetes defined by ADA 2010 guidelines before and after adding baseline A1C to the prediction model. RESULTS In the overall cohort, re-estimating the ARIC model in the CARDIA cohort resulted in good discrimination for the prediction of 5-year diabetes risk (area under the curve [AUC] 0.841). Adding baseline A1C as a predictor improved discrimination (AUC 0.841 vs. 0.863, P = 0.03). In race-stratified analyses, model discrimination was significantly higher in whites than AA (AUC AA 0.816 vs. whites 0.902; P = 0.008). CONCLUSIONS Addition of A1C to the ARIC diabetes risk prediction model improved performance overall and in racial subgroups. However, for all models examined, discrimination was better in whites than AA. Additional studies are needed to further improve diabetes risk prediction among AA. PMID:26628420
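
    The sketch below illustrates, on synthetic data only (not CARDIA data), the kind of comparison reported above: fit a risk model with and without an A1C-like predictor and compare discrimination by AUC. The variable names, effect sizes, and sample size are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: a few baseline risk factors plus an "A1C" column.
rng = np.random.default_rng(1)
n = 2000
X_base = rng.normal(size=(n, 4))                      # e.g. age, BMI, glucose, BP (illustrative)
a1c = rng.normal(size=n)
logit = 0.8 * X_base[:, 0] + 0.5 * X_base[:, 2] + 1.0 * a1c - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))      # incident diabetes indicator

X_with_a1c = np.column_stack([X_base, a1c])
Xb_tr, Xb_te, Xa_tr, Xa_te, y_tr, y_te = train_test_split(X_base, X_with_a1c, y, random_state=0)

# Discrimination (AUC) of the baseline model vs. the model augmented with A1C
auc_base = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_a1c = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(Xa_tr, y_tr).predict_proba(Xa_te)[:, 1])
print(f"AUC without A1C: {auc_base:.3f}   AUC with A1C: {auc_a1c:.3f}")
```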

  18. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
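
    The quantitative half of the framework optimises kinetic rates with simulated annealing. The hedged sketch below shows a generic annealing loop over a vector of rate constants; the cooling schedule, perturbation scale, and toy loss function are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def anneal_rates(loss, k0, steps=2000, t0=1.0, t_end=1e-3, seed=0):
    """Simulated annealing over a vector of kinetic rate constants.
    `loss(k)` should measure the mismatch between a candidate model's
    simulated behaviour and the target system's behaviour."""
    rng = np.random.default_rng(seed)
    k, best = np.array(k0, dtype=float), np.array(k0, dtype=float)
    e = best_e = loss(k)
    for i in range(steps):
        T = t0 * (t_end / t0) ** (i / steps)                 # geometric cooling schedule
        cand = k * np.exp(rng.normal(0.0, 0.1, size=k.size)) # log-scale perturbation keeps rates positive
        e_cand = loss(cand)
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / T):
            k, e = cand, e_cand
            if e < best_e:
                best, best_e = k.copy(), e
    return best, best_e

# Toy example: recover two rate constants against a quadratic placeholder loss
target = np.array([2.0, 0.5])
loss = lambda k: float(np.sum((k - target) ** 2))   # stand-in for a simulation-based loss
k_hat, err = anneal_rates(loss, k0=[1.0, 1.0])
print("recovered rates:", np.round(k_hat, 2), "loss:", round(err, 4))
```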

  19. Existence of complex patterns in the Beddington-DeAngelis predator-prey model.

    PubMed

    Haque, Mainul

    2012-10-01

    The study of reaction-diffusion systems constitutes some of the most fascinating developments of late twentieth-century mathematics and biology. This article investigates complexity and chaos in the complex pattern dynamics of the original Beddington-DeAngelis predator-prey model, which concerns the influence of intraspecies competition among predators. We investigate the emergence of complex patterns through reaction-diffusion equations in this system. We derive the conditions for the codimension-2 Turing-Hopf, Turing-Saddle-node, and Turing-Transcritical bifurcation, and the codimension-3 Turing-Takens-Bogdanov bifurcation. These bifurcations give rise to very complex patterns that have not been observed in previous predator-prey models. A large variety of different types of long-term behavior, including homogeneous distributions and stationary spatial patterns, is observed through extensive numerical simulations with experimentally based parameter values. Finally, a discussion of the ecological implications of the analytical and numerical results concludes the paper.

  20. Quantitative comparisons of satellite observations and cloud models

    NASA Astrophysics Data System (ADS)

    Wang, Fang

    Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies (< 37 GHz) can detect the emission signals from the raining clouds over radiometrically cold ocean surfaces while higher frequencies (≥ 37 GHz) are more sensitive to the scattering of the precipitating-sized ice particles in the convective storms over high-emissivity land, which lend them particular capabilities for different applications. Both are explored with a different scenario for each case: a comparison of two rainfall retrievals over ocean and a comparison of a cloud model simulation to satellite observations over land. Both the Goddard Profiling algorithm (GPROF) and European Centre for Medium-Range Weather Forecasts (ECMWF) one-dimensional + four-dimensional variational analysis (1D+4D-Var) rainfall retrievals are inversion algorithms based on the Bayes' theorem. Differences stem primarily from the a-priori information. GPROF uses an observationally generated a-priori database while ECMWF 1D-Var uses the model forecast First Guess (FG) fields. The relative similarity in the two approaches means that comparisons can shed light on the differences that are produced by the a-priori information. Case studies have found that differences can be classified into four categories based upon the agreement in the brightness temperatures (Tbs) and in the microphysical properties of Cloud Water Path (CWP) and Rain Water Path (RWP) space. We found a category of special interest in which both retrievals converge to similar Tb through minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently and takes up 46

  1. Quantitative properties of clustering within modern microscopic nuclear models

    SciTech Connect

    Volya, A.; Tchuvil’sky, Yu. M.

    2016-09-15

    A method for studying cluster spectroscopic properties of nuclear fragmentation, such as spectroscopic amplitudes, cluster form factors, and spectroscopic factors, is developed on the basis of modern precision nuclear models that take into account the mixing of large-scale shell-model configurations. Alpha-cluster channels are considered as an example. A mathematical proof of the need for taking into account the channel-wave-function renormalization generated by exchange terms of the antisymmetrization operator (Fliessbach effect) is given. Examples where this effect is confirmed by a high quality of the description of experimental data are presented. By and large, the method in question extends substantially the possibilities for studying clustering phenomena in nuclei and for improving the quality of their description.

  2. The introspective may achieve more: Enhancing existing Geoscientific models with native-language emulated structural reflection

    DOE PAGES

    Ji, Xinye; Shen, Chaopeng

    2017-09-28

    Geoscientific models manage myriad and increasingly complex data structures as trans-disciplinary models are integrated. They often incur significant redundancy with cross-cutting tasks. Reflection, the ability of a program to inspect and modify its structure and behavior at runtime, is known as a powerful tool to improve code reusability, abstraction, and separation of concerns. Reflection is rarely adopted in high-performance Geoscientific models, especially with Fortran, where it was previously deemed implausible. Practical constraints of language and legacy often limit us to feather-weight, native-language solutions. We demonstrate the usefulness of a structural-reflection-emulating, dynamically-linked metaObjects, gd. We show real-world examples including data structure self-assembly, effortless save/restart and upgrade to parallel I/O, recursive actions and batch operations. We share gd and a derived module that reproduces MATLAB-like structure in Fortran and C++. We suggest that both a gd representation and a Fortran-native representation are maintained to access the data, each for separate purposes. In conclusion, embracing emulated reflection allows generically-written codes that are highly re-usable across projects.

  3. Quantitative proteomics by metabolic labeling of model organisms.

    PubMed

    Gouw, Joost W; Krijgsveld, Jeroen; Heck, Albert J R

    2010-01-01

    In the biological sciences, model organisms have been used for many decades and have enabled the gathering of a large proportion of our present day knowledge of basic biological processes and their derailments in disease. Although in many of these studies using model organisms, the focus has primarily been on genetics and genomics approaches, it is important that methods become available to extend this to the relevant protein level. Mass spectrometry-based proteomics is increasingly becoming the standard to comprehensively analyze proteomes. An important transition has been made recently by moving from charting static proteomes to monitoring their dynamics by simultaneously quantifying multiple proteins obtained from differently treated samples. Especially the labeling with stable isotopes has proved an effective means to accurately determine differential expression levels of proteins. Among these, metabolic incorporation of stable isotopes in vivo in whole organisms is one of the favored strategies. In this perspective, we will focus on methodologies to stable isotope label a variety of model organisms in vivo, ranging from relatively simple organisms such as bacteria and yeast to Caenorhabditis elegans, Drosophila, and Arabidopsis up to mammals such as rats and mice. We also summarize how this has opened up ways to investigate biological processes at the protein level in health and disease, revealing conservation and variation across the evolutionary tree of life.

  4. Quantitative Proteomics by Metabolic Labeling of Model Organisms*

    PubMed Central

    Gouw, Joost W.; Krijgsveld, Jeroen; Heck, Albert J. R.

    2010-01-01

    In the biological sciences, model organisms have been used for many decades and have enabled the gathering of a large proportion of our present day knowledge of basic biological processes and their derailments in disease. Although in many of these studies using model organisms, the focus has primarily been on genetics and genomics approaches, it is important that methods become available to extend this to the relevant protein level. Mass spectrometry-based proteomics is increasingly becoming the standard to comprehensively analyze proteomes. An important transition has been made recently by moving from charting static proteomes to monitoring their dynamics by simultaneously quantifying multiple proteins obtained from differently treated samples. Especially the labeling with stable isotopes has proved an effective means to accurately determine differential expression levels of proteins. Among these, metabolic incorporation of stable isotopes in vivo in whole organisms is one of the favored strategies. In this perspective, we will focus on methodologies to stable isotope label a variety of model organisms in vivo, ranging from relatively simple organisms such as bacteria and yeast to Caenorhabditis elegans, Drosophila, and Arabidopsis up to mammals such as rats and mice. We also summarize how this has opened up ways to investigate biological processes at the protein level in health and disease, revealing conservation and variation across the evolutionary tree of life. PMID:19955089

  5. Software applications toward quantitative metabolic flux analysis and modeling.

    PubMed

    Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan

    2014-01-01

    Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation; further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
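
    Flux balance analysis itself reduces to a linear program: maximise an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The hedged sketch below solves a toy three-reaction network this way; the network, bounds, and objective are invented for illustration, and the toolboxes reviewed above wrap much larger versions of the same problem.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (R1) -> A, A -> B (R2), B -> export/biomass (R3)
# Stoichiometric matrix S (metabolites x reactions): rows are A and B
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]   # flux bounds per reaction
c = np.array([0.0, 0.0, -1.0])      # linprog minimizes, so maximize v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", np.round(res.x, 3))   # expected [10, 10, 10]: uptake-limited
```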

  6. A quantitative assessment of torque-transducer models for magnetoreception

    PubMed Central

    Winklhofer, Michael; Kirschvink, Joseph L.

    2010-01-01

    Although ferrimagnetic material appears suitable as a basis of magnetic field perception in animals, it is not known by which mechanism magnetic particles may transduce the magnetic field into a nerve signal. Provided that magnetic particles have remanence or anisotropic magnetic susceptibility, an external magnetic field will exert a torque and may physically twist them. Several models of such biological magnetic-torque transducers on the basis of magnetite have been proposed in the literature. We analyse from first principles the conditions under which they are viable. Models based on biogenic single-domain magnetite prove both effective and efficient, irrespective of whether the magnetic structure is coupled to mechanosensitive ion channels or to an indirect transduction pathway that exploits the stray field produced by the magnetic structure at different field orientations. On the other hand, torque-detector models that are based on magnetic multi-domain particles in the vestibular organs turn out to be ineffective. Also, we provide a generic classification scheme of torque transducers in terms of axial or polar output, within which we discuss the results from behavioural experiments conducted under altered field conditions or with pulsed fields. We find that the common assertion that a magnetoreceptor based on single-domain magnetite could not form the basis for an inclination compass does not always hold. PMID:20086054

  7. Afference copy as a quantitative neurophysiological model for consciousness.

    PubMed

    Cornelis, Hugo; Coop, Allan D

    2014-06-01

    Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider there is nothing particularly complicated about consciousness when viewed as a necessary process of the vertebrate nervous system. Here, we propose a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications.

  8. Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    PubMed Central

    Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636

  9. Quantitative Risk Modeling of Fire on the International Space Station

    NASA Technical Reports Server (NTRS)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  10. A two-locus model of spatially varying stabilizing or directional selection on a quantitative trait.

    PubMed

    Geroldinger, Ludwig; Bürger, Reinhard

    2014-06-01

    The consequences of spatially varying, stabilizing or directional selection on a quantitative trait in a subdivided population are studied. A deterministic two-locus two-deme model is employed to explore the effects of migration, the degree of divergent selection, and the genetic architecture, i.e., the recombination rate and ratio of locus effects, on the maintenance of genetic variation. The possible equilibrium configurations are determined as functions of the migration rate. They depend crucially on the strength of divergent selection and the genetic architecture. The maximum migration rates are investigated below which a stable fully polymorphic equilibrium or a stable single-locus polymorphism can exist. Under stabilizing selection, but with different optima in the demes, strong recombination may facilitate the maintenance of polymorphism. Usually, however, and in particular with directional selection in opposite directions, the critical migration rates are maximized by a concentrated genetic architecture, i.e., by a major locus and a tightly linked minor one. Thus, complementing previous work on the evolution of genetic architectures in subdivided populations subject to diversifying selection, it is shown that concentrated architectures may aid the maintenance of polymorphism. Conditions are obtained when this is the case. Finally, the dependence of the phenotypic variance, linkage disequilibrium, and various measures of local adaptation and differentiation on the parameters is elaborated. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  11. A two-locus model of spatially varying stabilizing or directional selection on a quantitative trait

    PubMed Central

    Geroldinger, Ludwig; Bürger, Reinhard

    2014-01-01

    The consequences of spatially varying, stabilizing or directional selection on a quantitative trait in a subdivided population are studied. A deterministic two-locus two-deme model is employed to explore the effects of migration, the degree of divergent selection, and the genetic architecture, i.e., the recombination rate and ratio of locus effects, on the maintenance of genetic variation. The possible equilibrium configurations are determined as functions of the migration rate. They depend crucially on the strength of divergent selection and the genetic architecture. The maximum migration rates are investigated below which a stable fully polymorphic equilibrium or a stable single-locus polymorphism can exist. Under stabilizing selection, but with different optima in the demes, strong recombination may facilitate the maintenance of polymorphism. Usually, however, and in particular with directional selection in opposite directions, the critical migration rates are maximized by a concentrated genetic architecture, i.e., by a major locus and a tightly linked minor one. Thus, complementing previous work on the evolution of genetic architectures in subdivided populations subject to diversifying selection, it is shown that concentrated architectures may aid the maintenance of polymorphism. Conditions are obtained when this is the case. Finally, the dependence of the phenotypic variance, linkage disequilibrium, and various measures of local adaptation and differentiation on the parameters is elaborated. PMID:24726489

  12. Quantitative Analysis of Strain in Analogue Models During Oblique Convergence

    NASA Astrophysics Data System (ADS)

    Haq, S. S.; Davis, D. M.

    2001-12-01

    Deformation resulting from oblique plate motion can be exceedingly complex, with spatial partitioning of strain that can seem unrelated to the observed plate motion. Most active convergent margins are oblique to their relative plate motion and exhibit some degree of strain partitioning, which has played a significant role in shaping most of them. Despite the ubiquity of oblique convergent margins, the mechanics of how these regions accommodate strain remains poorly understood and the deformation that occurs in them can easily be misinterpreted. It has been recognized that the partitioning of strain occurs in response to a combination of factors, including the degree of margin obliquity to plate motion, the strength of coupling between plates, the shape and length of the margin, and the presence or absence of translating blocks or terranes. However, the degree to which each of these factors controls the style of strain accommodation in oblique convergence remains poorly understood. To quantify the specific mechanical effect of each of these parameters at friction-dominated margins, we perform analogue modeling in which we quantify the strains for a variety of configurations. Using a sequence of digital images we have quantified 2D plane strains and can estimate the vertical strain in our analogue models. This method involves taking a series of digital images with a mega-pixel camera and tracking the motion of a reference grid through the life of the experiment. To calculate strains we first make a correction for the lens distortion, and then use a series of Java and Fortran programs to extract just those pixels corresponding to the reference grid. From this pixel data we calculate the color-weighted centroid of each node point, giving us sub-pixel precision on its x, y location. In a reasonably sized experiment a single pixel can correspond to a small fraction of a mm, giving us a high degree of resolution. After deleting spurious grid points and correcting for
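
    As a toy illustration of the grid-tracking step described above (the authors used Java and Fortran; the snippet below is a hypothetical NumPy equivalent), an intensity-weighted centroid locates a grid node with sub-pixel precision.

```python
import numpy as np

def weighted_centroid(patch):
    """Return the intensity-weighted (x, y) centroid of a small image patch.

    `patch` is a 2-D array of weights (e.g. how strongly each pixel matches the
    grid colour); the centroid can fall between pixel centres, giving sub-pixel precision.
    """
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    if total == 0:
        raise ValueError("empty patch: no grid-coloured pixels found")
    return (xs * patch).sum() / total, (ys * patch).sum() / total

# Toy example: a 5x5 patch whose bright spot is centred slightly off the pixel grid.
patch = np.array([[0, 0, 0, 0, 0],
                  [0, 1, 2, 1, 0],
                  [0, 2, 5, 3, 0],
                  [0, 1, 3, 2, 0],
                  [0, 0, 0, 0, 0]], dtype=float)
print(weighted_centroid(patch))   # approximately (2.1, 2.1): fractional pixel coordinates
```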

  13. Modeling Magnetite Reflectance Spectra Using Hapke Theory and Existing Optical Constants

    NASA Technical Reports Server (NTRS)

    Roush, T. L.; Blewett, D. T.; Cahill, J. T. S.

    2016-01-01

    Magnetite is an accessory mineral found in terrestrial environments, some meteorites, and the lunar surface. The reflectance of magnetite powders is relatively low [1], and this property makes it an analog for other dark Fe- or Ti-bearing components, particularly ilmenite on the lunar surface. The real and imaginary indices of refraction (optical constants) for magnetite are available in the literature [2-3], and online [4]. Here we use these values to calculate the reflectance of particulates and compare these model spectra to reflectance measurements of magnetite available on-line [5].
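
    The Hapke calculation itself is not reproduced in the abstract. As an illustrative sketch only (assuming isotropic scatterers, no opposition surge, and the standard two-stream approximation to the H-function), the bidirectional reflectance can be evaluated from a single-scattering albedo w; in practice w would first be derived from the published optical constants via Hapke's particle slab model, which is not shown here.

```python
import numpy as np

def H(x, w):
    """Two-stream approximation to Hapke's H-function."""
    gamma = np.sqrt(1.0 - w)
    return (1.0 + 2.0 * x) / (1.0 + 2.0 * x * gamma)

def hapke_reflectance(w, inc_deg, emi_deg):
    """Bidirectional reflectance for isotropic scatterers, no opposition effect."""
    mu0 = np.cos(np.radians(inc_deg))
    mu = np.cos(np.radians(emi_deg))
    return (w / (4.0 * np.pi)) * mu0 / (mu0 + mu) * (H(mu0, w) * H(mu, w))

# Example: a dark powder (low single-scattering albedo, as for magnetite) at i=30 deg, e=0 deg.
print(hapke_reflectance(w=0.2, inc_deg=30.0, emi_deg=0.0))
```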

  14. Concentric Coplanar Capacitive Sensor System with Quantitative Model

    NASA Technical Reports Server (NTRS)

    Bowler, Nicola (Inventor); Chen, Tianming (Inventor)

    2014-01-01

    A concentric coplanar capacitive sensor includes a charged central disc forming a first electrode, an outer annular ring coplanar with and outer to the charged central disc, the outer annular ring forming a second electrode, and a gap between the charged central disc and the outer annular ring. The first electrode and the second electrode may be attached to an insulative film. A method provides for determining transcapacitance between the first electrode and the second electrode and using the transcapacitance in a model that accounts for a dielectric test piece to determine inversely the properties of the dielectric test piece.

  15. Quantitative description of realistic wealth distributions by kinetic trading models

    NASA Astrophysics Data System (ADS)

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power-law behavior x^-(1+α) at the high end, where, in general, α is greater than 1 (Pareto’s law). Models based on kinetic theory, where a set of interacting agents trade money, yield power law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, finding the saving propensity distribution which yields a given wealth distribution over all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.
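
    For readers unfamiliar with these kinetic exchange models, the sketch below runs a standard forward simulation with quenched, agent-specific saving propensities (it does not solve the inverse problem treated in the paper); the population size, number of trades, and uniform propensity distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N, trades = 1000, 500_000
wealth = np.ones(N)                        # everyone starts with unit wealth
saving = rng.uniform(0.0, 1.0, size=N)     # quenched saving propensity lambda_i of each agent

for _ in range(trades):
    i, j = rng.integers(N), rng.integers(N)
    if i == j:
        continue
    eps = rng.uniform()
    pot = (1.0 - saving[i]) * wealth[i] + (1.0 - saving[j]) * wealth[j]  # money placed on the table
    wealth[i] = saving[i] * wealth[i] + eps * pot
    wealth[j] = saving[j] * wealth[j] + (1.0 - eps) * pot                # total wealth is conserved

# With heterogeneous saving propensities the upper tail approaches a Pareto-like power law.
print("share of total wealth held by the richest 1%:",
      np.sort(wealth)[-N // 100:].sum() / wealth.sum())
```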

  16. Design of a Representative Low Earth Orbit Satellite to Improve Existing Debris Models

    NASA Technical Reports Server (NTRS)

    Clark, S.; Dietrich, A.; Werremeyer, M.; Fitz-Coy, N.; Liou, J.-C.

    2012-01-01

    This paper summarizes the process and methodologies used in the design of a small-satellite, DebriSat, that represents materials and construction methods used in modern day Low Earth Orbit (LEO) satellites. This satellite will be used in a future hypervelocity impact test with the overall purpose to investigate the physical characteristics of modern LEO satellites after an on-orbit collision. The major ground-based satellite impact experiment used by DoD and NASA in their development of satellite breakup models was conducted in 1992. The target used for that experiment was a Navy Transit satellite (40 cm, 35 kg) fabricated in the 1960s. Modern satellites are very different in materials and construction techniques from a satellite built 40 years ago. Therefore, there is a need to conduct a similar experiment using a modern target satellite to improve the fidelity of the satellite breakup models. The design of DebriSat will focus on designing and building a next-generation satellite to more accurately portray modern satellites. The design of DebriSat included a comprehensive study of historical LEO satellite designs and missions within the past 15 years for satellites ranging from 10 kg to 5000 kg. This study identified modern trends in hardware, material, and construction practices utilized in recent LEO missions, and helped direct the design of DebriSat.

  17. Water Use Conservation Scenarios for the Mississippi Delta Using an Existing Regional Groundwater Flow Model

    NASA Astrophysics Data System (ADS)

    Barlow, J. R.; Clark, B. R.

    2010-12-01

    The alluvial plain in northwestern Mississippi, locally referred to as the Delta, is a major agricultural area, which contributes significantly to the economy of Mississippi. Land use in this area can be greater than 90 percent agriculture, primarily for growing catfish, corn, cotton, rice, and soybean. Irrigation is needed to smooth out the vagaries of climate and is necessary for the cultivation of rice and for the optimization of corn and soybean. The Mississippi River Valley alluvial (MRVA) aquifer, which underlies the Delta, is the sole source of water for irrigation, and overuse of the aquifer has led to water-level declines, particularly in the central region. The Yazoo-Mississippi-Delta Joint Water Management District (YMD), which is responsible for water issues in the 17-county area that makes up the Delta, is directing resources to reduce the use of water through conservation efforts. The U.S. Geological Survey (USGS) recently completed a regional groundwater flow model of the entire Mississippi embayment, including the Mississippi Delta region, to further our understanding of water availability within the embayment system. This model is being used by the USGS to assist YMD in optimizing their conservation efforts by applying various water-use reduction scenarios, either uniformly throughout the Delta, or in focused areas where there have been large groundwater declines in the MRVA aquifer.

  18. The effects of the overline running model of the high-speed trains on the existing lines

    NASA Astrophysics Data System (ADS)

    Qian, Yong-Sheng; Zeng, Jun-Wei; Zhang, Xiao-Long; Wang, Jia-Yuan; Lv, Ting-Ting

    2016-09-01

    This paper studies the effects on an existing railway when 216 km/h high-speed trains run over it. The influence of the transportation organization mode of the existing railway on its carrying capacity is also analyzed under different stopping patterns of the high-speed trains. To further study train departure intervals, average speeds, and delays, an automaton model covering these four aspects is established. The results of this research could serve as theoretical references for newly built high-speed railways.

  19. Influence of stone content on soil hydraulic properties: experimental investigation and test of existing model concepts

    NASA Astrophysics Data System (ADS)

    Naseri, Mahyar; Richter, Niels; Iden, Sascha C.; Durner, Wolfgang

    2017-04-01

    Rock fragments in soil, in this contribution referred to as "stones", play an important role for water flow in the subsurface. To successfully model soil hydraulic processes such as evaporation, redistribution and drainage, an understanding of how stones affect soil hydraulic properties (SHP) is crucial. Past investigations on the role of stones in soil have focused on their influence on the water retention curve (WRC) and on saturated hydraulic conductivity Ks, and have led to some simple theoretical models for the influence of stones on effective SHP. However, studies that measure unsaturated SHP directly, i.e., simultaneously the WRC and hydraulic conductivity curve (HCC) are still missing. Also, studies so far were restricted to low or moderate stone contents of less than 40%. We conducted a laboratory study in which we examined the effect of stone content on effective WRC and HCC of stony soils. Mixtures of soil and stones were generated by substituting background soil with stones in weight fractions between 0% (fine material only) to 100% (pure gravel). Stone sizes were 2-5 mm and 7-15 mm, respectively, and background soils were Sand and Sandy Loam. Packed samples were fully saturated under vacuum and subsequently subjected to evaporation in the laboratory. All experiments were done in three replicates. The soil hydraulic properties were determined by the simplified evaporation method using the UMS HYPROP setup. Questions were whether the applied measurement methodology is applicable to derive the SHP of the mixtures and how the gradual increase of stone content will affect the SHP, particularly the HCC. The applied methodology was successful in identifying effective SHP with a high precision over the full moisture range. WRC and HCC were successfully obtained by HYPROP, even for pure gravel with a size of 7-15 mm. WRCs changed qualitatively in the expected manner, i.e., an increase of stone content reduced porosity and soil water content at all suctions
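
    One of the simple model concepts commonly tested in studies of stony soils scales the retention curve of the fine earth by the volume fraction it occupies, since the stones themselves hold essentially no water. The sketch below illustrates that idea with an illustrative van Genuchten parameterisation; the parameter values are placeholders, not the measured HYPROP data.

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """Van Genuchten water retention of the fine earth; h is suction (positive, cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def stony_theta(h, stone_volume_fraction, **vg_params):
    """Simple mixture model: stones hold no water, so the bulk water content of the
    stony soil is the fine-earth retention scaled by (1 - stone volume fraction)."""
    return (1.0 - stone_volume_fraction) * vg_theta(h, **vg_params)

h = np.logspace(0, 4, 5)                                      # suctions from 1 to 10^4 cm
fine = dict(theta_r=0.05, theta_s=0.40, alpha=0.02, n=1.6)    # illustrative sandy-loam values
for rv in (0.0, 0.2, 0.4):
    print(f"Rv={rv:.1f}:", np.round(stony_theta(h, rv, **fine), 3))
```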

  20. Modeling Aseismic and Seismic Slip Induced by Fluid Injection on Pre-existing Faults Governed by Rate-and-state Friction

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Harrington, R. M.; Deng, K.; Larochelle, S.

    2015-12-01

    Pore fluid pressure evolution on pre-existing faults in the vicinity of fluid injection activity has been postulated as a key factor for inducing both moderate size earthquakes and aseismic slip. In this study, we develop a numerical model incorporating rate-and-state friction properties to investigate fault slip initiated by various perturbations, including fluid injection and transient dynamic stress changes. In the framework of rate-and-state friction, external stress perturbations and their spatiotemporal variation can be coupled to fault frictional strength evolution in a single computational procedure. Hence it provides a quantitative understanding of the source processes (i.e., slip rate, rupture area, triggering threshold) of a spectrum of slip modes under the influence of anthropogenic and natural perturbations. Preliminary results show both the peak and cumulative Coulomb stress change values can affect the transition from aseismic to seismic slip and the amount of slip. We plan to apply the physics-based slip model to induced earthquakes in western Canada sedimentary basins. In particular, we will focus on the Fox Creek sequences in north Alberta, where two earthquakes of ML4.4 (2015/01/23) and Mw4.6 (2015/06/13) were potentially induced by nearby hydraulic fracturing activity. The geometry of the seismogenic faults of the two events will be constrained by relocated seismicity as well as their focal mechanism solutions. Rate-and-state friction parameters and ambient stress conditions will be constrained by identifying dynamic triggering criteria using a matched-filter approach. A poroelastic model will be used to estimate the pore pressure history resolved onto the fault plane due to fluid injection. By comparing modeled earthquake source parameters to those estimated from seismic analysis, we aim to quantitatively discern the nucleation conditions of injection-induced versus dynamically triggered earthquakes, and aseismic versus seismic slip modes.
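
    The coupling described above can be illustrated with a single-degree-of-freedom spring-slider obeying rate-and-state friction (Dieterich aging law), in which a prescribed pore-pressure history reduces the effective normal stress. This is a generic sketch of the framework, not the authors' model; all parameter values are illustrative and are not those calibrated for the Fox Creek sequences.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate-and-state parameters (velocity-weakening: a < b)
a, b, Dc = 0.005, 0.008, 1e-4    # friction parameters; Dc in metres
mu0, V0 = 0.6, 1e-6              # reference friction coefficient and slip rate (m/s)
k = 1e9                          # loading-system stiffness (Pa/m), above critical so slip stays aseismic
V_pl = 1e-9                      # long-term loading velocity, m/s
sigma = 20e6                     # total normal stress, Pa

def pore_pressure(t):
    """Prescribed injection transient: pressure ramps up to ~5 MPa around t = 5e6 s."""
    return 5e6 / (1.0 + np.exp(-(t - 5e6) / 5e5))

def rhs(t, y):
    tau, theta = y
    sig_eff = sigma - pore_pressure(t)                 # effective normal stress
    # Invert the rate-and-state friction law for the instantaneous slip rate V
    V = V0 * np.exp((tau / sig_eff - mu0 - b * np.log(V0 * theta / Dc)) / a)
    dtau = k * (V_pl - V)                              # elastic loading / unloading
    dtheta = 1.0 - V * theta / Dc                      # Dieterich aging law
    return [dtau, dtheta]

# Start from steady sliding at the loading rate
tau0 = (sigma - pore_pressure(0.0)) * (mu0 + (a - b) * np.log(V_pl / V0))
sol = solve_ivp(rhs, (0.0, 1.5e7), [tau0, Dc / V_pl], method="LSODA", max_step=1e4, rtol=1e-8)

theta = sol.y[1]
V = V0 * np.exp((sol.y[0] / (sigma - pore_pressure(sol.t)) - mu0 - b * np.log(V0 * theta / Dc)) / a)
print("peak slip rate (m/s):", V.max())   # transient acceleration as pore pressure rises
```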

  1. A quantitative and dynamic model for plant stem cell regulation.

    PubMed

    Geier, Florian; Lohmann, Jan U; Gerstung, Moritz; Maier, Annette T; Timmer, Jens; Fleck, Christian

    2008-01-01

    Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation, and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place, which allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results we developed a mathematical model to explain the observed changes by a cell pool size dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data this behavior is correlated with variations in cell proliferation. Therefore, we investigate different signaling mechanisms, which could stabilize stem cell number despite variations in cell proliferation. Our results shed light onto the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis in different environmental conditions and developmental states.

  2. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  3. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  4. Evaluation of a quantitative phosphorus transport model for potential improvement of southern phosphorus indices

    USDA-ARS?s Scientific Manuscript database

    Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...

  5. Corequisite Model: An Effective Strategy for Remediation in Freshmen Level Quantitative Reasoning Course

    ERIC Educational Resources Information Center

    Kashyap, Upasana; Mathew, Santhosh

    2017-01-01

    The purpose of this study was to compare students' performances in a freshmen level quantitative reasoning course (QR) under three different instructional models. A cohort of 155 freshmen students was placed in one of the three models: needing a prerequisite course, corequisite (students enroll simultaneously in QR course and a course that…

  6. Quantitative Modeling of Entangled Polymer Rheology: Experiments, Tube Models and Slip-Link Simulations

    NASA Astrophysics Data System (ADS)

    Desai, Priyanka Subhash

    Rheological properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We then used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set and prescribed their values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone. Third, we compared predictions of two versions of the tube model, Hierarchical model and BoB model against linear viscoelastic data of blends of 1,4-PBd
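
    For readers less familiar with linear viscoelastic mastercurves: the storage and loss moduli G'(omega) and G''(omega) are routinely represented by a discrete spectrum of relaxation modes. The sketch below evaluates that generic representation (it is not the tube-model calculation used in the dissertation); the mode strengths and relaxation times are illustrative values only.

```python
import numpy as np

# Discrete relaxation spectrum: each mode has a modulus g_i and a relaxation time tau_i.
# (Illustrative values; in practice these would be fitted to measured mastercurves.)
g = np.array([2e5, 8e4, 2e4, 5e3])        # Pa
tau = np.array([1e-4, 1e-2, 1.0, 1e2])    # s

def moduli(omega):
    """Storage and loss moduli G'(omega), G''(omega) of the discrete spectrum."""
    wt = np.outer(omega, tau)
    g_storage = (g * wt**2 / (1.0 + wt**2)).sum(axis=1)
    g_loss = (g * wt / (1.0 + wt**2)).sum(axis=1)
    return g_storage, g_loss

omega = np.logspace(-4, 6, 6)   # rad/s
for w, gs, gl in zip(omega, *moduli(omega)):
    print(f"omega = {w:8.2e} rad/s  G' = {gs:9.3e} Pa  G'' = {gl:9.3e} Pa")
```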

  7. 18FDG synthesis and supply: a journey from existing centralized to future decentralized models.

    PubMed

    Uz Zaman, Maseeh; Fatima, Nosheen; Sajjad, Zafar; Zaman, Unaiza; Tahseen, Rabia; Zaman, Areeba

    2014-01-01

    Positron emission tomography (PET) as the functional component of current hybrid imaging (like PET/CT or PET/MRI) seems to dominate the horizon of medical imaging in coming decades. 18Fluorodeoxyglucose (18FDG) is the most commonly used probe in oncology and also in cardiology and neurology around the globe. However, the major capital cost and exorbitant running expenditure of low- to medium-energy cyclotrons (about 20 MeV) and radiochemistry units are the main reasons why cyclotrons remain few while PET scanners have shown a mushrooming growth pattern. This fact and the relatively long half-life of 18F (110 minutes) have paved the way for a centralized model in which 18FDG is produced by commercial PET radiopharmacies and the finished product (multi-dose vial with tungsten shielding) is dispensed to customers having only PET scanners. This indeed reduces cost but depends on the timely arrival of daily shipments; any delay results in cancellation or rescheduling of the PET procedures. In recent years, industry and academia have taken a step forward by producing low-energy, table-top cyclotrons with compact and automated radiochemistry units (Lab-on-Chip). This decentralized strategy enables the users to produce on-demand doses of PET probe themselves at reasonably low cost using an automated and user-friendly technology. This technological development would indeed provide a real impetus to the availability of complete set up of PET based molecular imaging at an affordable cost to the developing countries.
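
    The logistics trade-off sketched above is ultimately a decay calculation: with the 110-minute half-life of 18F, the activity a centralized radiopharmacy must dispatch grows exponentially with transport delay. A small, self-contained illustration (dose and delay values are arbitrary examples):

```python
import math

HALF_LIFE_MIN = 110.0           # physical half-life of 18F, minutes

def activity_remaining(initial_mci, elapsed_min):
    """Activity left after `elapsed_min` minutes of decay."""
    return initial_mci * 2.0 ** (-elapsed_min / HALF_LIFE_MIN)

def required_at_dispatch(dose_needed_mci, transport_min):
    """Activity that must leave the radiopharmacy so `dose_needed_mci` survives transport."""
    return dose_needed_mci * 2.0 ** (transport_min / HALF_LIFE_MIN)

# Example: a 10 mCi patient dose with 2, 4 and 6 hours of transport/queueing delay.
for hours in (2, 4, 6):
    print(f"{hours} h delay -> ship {required_at_dispatch(10.0, hours * 60):5.1f} mCi")
```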

  8. A quantitative model of the biogeochemical transport of iodine

    NASA Astrophysics Data System (ADS)

    Weng, H.; Ji, Z.; Weng, J.

    2010-12-01

    Iodine deficiency disorders (IDD) are among the world’s most prevalent public health problems yet preventable by dietary iodine supplements. To better understand the biogeochemical behavior of iodine and to explore safer and more efficient ways of iodine supplementation as alternatives to iodized salt, we studied the behavior of iodine as it is absorbed, accumulated and released by plants. Using Chinese cabbage as a model system and the 125I tracing technique, we established that plants uptake exogenous iodine from soil, most of which are transported to the stem and leaf tissue. The level of absorption of iodine by plants is dependent on the iodine concentration in soil, as well as the soil types that have different iodine-adsorption capacity. The leaching experiment showed that the remainder soil content of iodine after leaching is determined by the iodine-adsorption ability of the soil and the pH of the leaching solution, but not the volume of leaching solution. Iodine in soil and plants can also be released to the air via vaporization in a concentration-dependent manner. This study provides a scientific basis for developing new methods to prevent IDD through iodized vegetable production.

  9. Toward a quantitative model of metamorphic nucleation and growth

    NASA Astrophysics Data System (ADS)

    Gaidies, F.; Pattison, D. R. M.; de Capitani, C.

    2011-11-01

    The formation of metamorphic garnet during isobaric heating is simulated on the basis of the classical nucleation and reaction rate theories and Gibbs free energy dissipation in a multi-component model system. The relative influences of interfacial energy, chemical mobility at the surface of garnet clusters, heating rate, and pressure on interface-controlled garnet nucleation and growth kinetics are studied. It is found that the interfacial energy controls the departure from equilibrium required to nucleate garnet if attachment and detachment processes at the surface of garnet limit the overall crystallization rate. The interfacial energy for nucleation of garnet in a metapelite of the aureole of the Nelson Batholith, BC, is estimated to range between 0.03 and 0.3 J/m2 at a pressure of ca. 3,500 bar. This corresponds to a thermal overstep of the garnet-forming reaction of ca. 30°C. The influence of the heating rate on thermal overstepping is negligible. A significant feedback is predicted between chemical fractionation associated with garnet formation and the kinetics of nucleation and crystal growth of garnet giving rise to its lognormal-shaped crystal size distribution.
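
    The nucleation step in such a simulation follows classical nucleation theory, where the interfacial energy enters the activation barrier to the third power. The sketch below evaluates that barrier and the corresponding steady-state rate for interfacial energies spanning the range quoted above; the volumetric driving force, temperature, and kinetic prefactor are illustrative placeholders, not values from the study.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K

def nucleation_barrier(sigma, dG_v):
    """Classical nucleation theory barrier for a spherical nucleus.
    sigma: interfacial energy (J/m^2); dG_v: volumetric driving force (J/m^3, > 0)."""
    return 16.0 * np.pi * sigma**3 / (3.0 * dG_v**2)

def nucleation_rate(sigma, dG_v, T, J0=1e30):
    """Steady-state nucleation rate; J0 is an illustrative kinetic prefactor (m^-3 s^-1)."""
    return J0 * np.exp(-nucleation_barrier(sigma, dG_v) / (K_B * T))

T = 823.0               # ~550 deg C, an illustrative aureole temperature
dG_v = 3e7              # placeholder driving force from modest overstepping, J/m^3
for sigma in (0.03, 0.1, 0.3):
    print(f"sigma = {sigma:4.2f} J/m^2 -> barrier = {nucleation_barrier(sigma, dG_v):.2e} J, "
          f"rate = {nucleation_rate(sigma, dG_v, T):.2e} m^-3 s^-1")
```

    The cubic dependence on the interfacial energy is what makes the inferred 0.03-0.3 J/m2 range so consequential for the predicted overstepping.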

  10. Quantitative nonlinearity analysis of model-scale jet noise

    NASA Astrophysics Data System (ADS)

    Miller, Kyle G.; Reichman, Brent O.; Gee, Kent L.; Neilsen, Tracianne B.; Atchley, Anthony A.

    2015-10-01

    The effects of nonlinearity on the power spectrum of jet noise can be directly compared with those of atmospheric absorption and geometric spreading through an ensemble-averaged, frequency-domain version of the generalized Burgers equation (GBE) [B. O. Reichman et al., J. Acoust. Soc. Am. 136, 2102 (2014)]. The rate of change in the sound pressure level due to the nonlinearity, in decibels per jet nozzle diameter, is calculated using a dimensionless form of the quadspectrum of the pressure and the squared-pressure waveforms. In this paper, this formulation is applied to atmospheric propagation of a spherically spreading, initial sinusoid and unheated model-scale supersonic (Mach 2.0) jet data. The rate of change in level due to nonlinearity is calculated and compared with estimated effects due to absorption and geometric spreading. Comparing these losses with the change predicted due to nonlinearity shows that absorption and nonlinearity are of similar magnitude in the geometric far field, where shocks are present, which causes the high-frequency spectral shape to remain unchanged.

  11. Canalization, genetic assimilation and preadaptation. A quantitative genetic model.

    PubMed Central

    Eshel, I; Matessi, C

    1998-01-01

    We propose a mathematical model to analyze the evolution of canalization for a trait under stabilizing selection, where each individual in the population is randomly exposed to different environmental conditions, independently of its genotype. Without canalization, our trait (primary phenotype) is affected by both genetic variation and environmental perturbations (morphogenic environment). Selection of the trait depends on individually varying environmental conditions (selecting environment). Assuming no plasticity initially, morphogenic effects are not correlated with the direction of selection in individual environments. Under quite plausible assumptions we show that natural selection favors a system of canalization that tends to repress deviations from the phenotype that is optimal in the most common selecting environment. However, many experimental results, dating back to Waddington and others, indicate that natural canalization systems may fail under extreme environments. While this can be explained as an impossibility of the system to cope with extreme morphogenic pressure, we show that a canalization system that tends to be inactivated in extreme environments is even more advantageous than rigid canalization. Moreover, once this adaptive canalization is established, the resulting evolution of primary phenotype enables substantial preadaptation to permanent environmental changes resembling extreme niches of the previous environment. PMID:9691063

  12. A quantitative confidence signal detection model: 1. Fitting psychometric functions

    PubMed Central

    Yi, Yongwoo

    2016-01-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777

  13. A quantitative confidence signal detection model: 1. Fitting psychometric functions.

    PubMed

    Yi, Yongwoo; Merfeld, Daniel M

    2016-04-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. Copyright © 2016 the American Physiological Society.
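
    For context, the conventional (confidence-free) baseline that the paper improves on amounts to a maximum-likelihood fit of a psychometric function to binary forced-choice responses. A minimal sketch of that baseline (cumulative-Gaussian model, simulated observer, 100 trials) is shown below; the confidence-probability extension described in the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated forced-choice experiment: true threshold (sigma) = 2.0, no bias.
stimuli = rng.uniform(-6.0, 6.0, size=100)
responses = (stimuli / 2.0 + rng.standard_normal(stimuli.size)) > 0   # "rightward" decisions

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    p = norm.cdf((stimuli - mu) / sigma)          # probability of a "rightward" response
    p = np.clip(p, 1e-9, 1 - 1e-9)                # guard against log(0)
    return -np.sum(responses * np.log(p) + (~responses) * np.log(1.0 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
print(f"estimated bias = {mu_hat:.2f}, threshold (sigma) = {sigma_hat:.2f}")
```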

  14. High-response piezoelectricity modeled quantitatively near a phase boundary

    NASA Astrophysics Data System (ADS)

    Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.

    2017-01-01

    Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.

  15. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
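
    To make the class of models concrete, the sketch below evaluates the standard two-dimensional vertex-model energy (area elasticity, edge tension, perimeter contractility) for a toy tessellation. Parameter values are illustrative; edge tension is applied here as half the perimeter term per cell, so shared edges are counted once and boundary edges half, which is a simplification of a full edge-based implementation.

```python
import numpy as np

# Standard 2-D vertex-model energy terms; parameter values are illustrative.
K_AREA, A0, LAMBDA, GAMMA = 1.0, 1.0, 0.1, 0.04

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of a polygon given ordered vertex coordinates."""
    x, y = pts[:, 0], pts[:, 1]
    area = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))
    return area, perimeter

def tissue_energy(vertices, cells):
    """Total energy of a tessellation; `cells` lists the vertex indices of each polygon."""
    energy = 0.0
    for cell in cells:
        a, p = polygon_area_perimeter(vertices[cell])
        energy += 0.5 * K_AREA * (a - A0) ** 2 + 0.5 * LAMBDA * p + 0.5 * GAMMA * p ** 2
    return energy

# Toy tissue: two unit squares sharing an edge.
vertices = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [1, 1], [0, 1]], float)
cells = [[0, 1, 4, 5], [1, 2, 3, 4]]
print("energy:", tissue_energy(vertices, cells))
```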

  16. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum.

    PubMed

    Milic, Natasa M; Trajkovic, Goran Z; Bukumiric, Zoran M; Cirkovic, Andja; Nikolic, Ivan M; Milin, Jelena S; Milic, Nikola V; Savic, Marko D; Corac, Aleksandar M; Marinkovic, Jelena M; Stanisavljevic, Dejana M

    2016-01-01

    Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training

  17. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34
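
    The abstract does not state the exact form of the predictive model, so the sketch below uses logistic regression purely as an assumed stand-in: it combines four synthetic voxel-wise parameters (labelled ADC, T2, Ktrans, kep) into a composite score and compares its AUC with that of a single parameter. It assumes scikit-learn is available, the data are fabricated, and no train/test split or histopathologic coregistration is performed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Synthetic voxel data standing in for coregistered MR parameters.
n = 5000
is_cancer = rng.random(n) < 0.15
adc    = rng.normal(np.where(is_cancer, 0.9, 1.4), 0.25, n)   # cancer voxels: lower ADC
t2     = rng.normal(np.where(is_cancer, 90, 120), 20, n)
ktrans = rng.normal(np.where(is_cancer, 0.35, 0.20), 0.08, n)
kep    = rng.normal(np.where(is_cancer, 0.9, 0.6), 0.2, n)
X = np.column_stack([adc, t2, ktrans, kep])

# Multiparametric model -> voxel-wise composite score (predicted cancer probability).
model = LogisticRegression(max_iter=1000).fit(X, is_cancer)
cbs = model.predict_proba(X)[:, 1]

# In a real study the AUC would be evaluated on held-out voxels, not the training data.
print("AUC, composite score:", round(roc_auc_score(is_cancer, cbs), 3))
print("AUC, ADC alone:      ", round(roc_auc_score(is_cancer, -adc), 3))  # lower ADC = more suspicious
```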

  18. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum

    PubMed Central

    Milic, Natasa M.; Trajkovic, Goran Z.; Bukumiric, Zoran M.; Cirkovic, Andja; Nikolic, Ivan M.; Milin, Jelena S.; Milic, Nikola V.; Savic, Marko D.; Corac, Aleksandar M.; Marinkovic, Jelena M.; Stanisavljevic, Dejana M.

    2016-01-01

    Background Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. Methods This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013–14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Results Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). Conclusion This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient

  19. Evaluation of Modeled and Measured Energy Savings in Existing All Electric Public Housing in the Pacific Northwest

    SciTech Connect

    Gordon, Andrew; Lubliner, Michael; Howard, Luke; Kunkle, Rick; Salzberg, Emily

    2014-04-01

    This project analyzes the cost effectiveness of energy savings measures installed by a large public housing authority in Salishan, a community in Tacoma, Washington. Research focuses on the modeled and measured energy usage of the first six phases of construction, and compares the energy usage of those phases to phase 7. Market-ready energy solutions were also evaluated to improve the efficiency of new and existing (built since 2001) affordable housing in the marine climate of Washington State.

  20. The Lightning Rod Model: a Genesis for Quantitative Near-Field Spectroscopy

    NASA Astrophysics Data System (ADS)

    McLeod, Alexander; Andreev, Gregory; Dominguez, Gerardo; Thiemens, Mark; Fogler, Michael; Basov, D. N.

    2013-03-01

    Near-field infrared spectroscopy has the proven ability to resolve optical contrasts in materials at deeply sub-wavelength scales across a broad range of infrared frequencies. In principle, the technique enables sub-diffractional optical identification of chemical compositions within nanostructured and naturally heterogeneous samples. However current models of probe-sample optical interaction, while qualitatively descriptive, cannot quantitatively explain infrared near-field spectra, especially for strongly resonant sample materials. We present a new first-principles model of near-field interaction, and demonstrate its superb agreement with infrared near-field spectra measured for thin films of silicon dioxide and the strongly phonon-resonant material silicon carbide. Using this model we reveal the role of probe geometry and surface mode dispersion in shaping the measured near-field spectrum, establishing its quantitative relationship with the dielectric properties of the sample. This treatment offers a route to the quantitative determination of optical constants at the nano-scale.

  1. Modeling approaches for qualitative and semi-quantitative analysis of cellular signaling networks

    PubMed Central

    2013-01-01

    A central goal of systems biology is the construction of predictive models of bio-molecular networks. Cellular networks of moderate size have been modeled successfully in a quantitative way based on differential equations. However, in large-scale networks, knowledge of mechanistic details and kinetic parameters is often too limited to allow for the set-up of predictive quantitative models. Here, we review methodologies for qualitative and semi-quantitative modeling of cellular signal transduction networks. In particular, we focus on three different but related formalisms facilitating modeling of signaling processes with different levels of detail: interaction graphs, logical/Boolean networks, and logic-based ordinary differential equations (ODEs). Albeit the simplest models possible, interaction graphs allow the identification of important network properties such as signaling paths, feedback loops, or global interdependencies. Logical or Boolean models can be derived from interaction graphs by constraining the logical combination of edges. Logical models can be used to study the basic input–output behavior of the system under investigation and to analyze its qualitative dynamic properties by discrete simulations. They also provide a suitable framework to identify proper intervention strategies enforcing or repressing certain behaviors. Finally, as a third formalism, Boolean networks can be transformed into logic-based ODEs enabling studies on essential quantitative and dynamic features of a signaling network, where time and states are continuous. We describe and illustrate key methods and applications of the different modeling formalisms and discuss their relationships. In particular, as one important aspect for model reuse, we will show how these three modeling approaches can be combined to a modeling pipeline (or model hierarchy) allowing one to start with the simplest representation of a signaling network (interaction graph), which can later be refined to
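
    To make the middle formalism concrete, the sketch below simulates a tiny, made-up three-node Boolean signaling motif under synchronous updates; the wiring and update rules are illustrative and are not taken from any specific pathway or from the review itself.

```python
from itertools import product

# Toy motif: receptor R activates kinase K; K activates transcription factor TF;
# TF feeds back and inhibits R (a simple negative feedback loop, input assumed present).
def update(state):
    R, K, TF = state
    return (not TF,      # R is on unless repressed by TF
            R,           # K follows R
            K)           # TF follows K

def trajectory(state, steps=8):
    states = [state]
    for _ in range(steps):
        state = update(state)
        states.append(state)
    return states

# Exhaustive exploration of the 2^3 initial states reveals the attractor reached from each.
for init in product([False, True], repeat=3):
    path = trajectory(init)
    print(init, "->", path[-3:])   # the tail shows part of the cyclic attractor
```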

  2. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study.

    PubMed

    Borji, Ali; Sihite, Dicky N; Itti, Laurent

    2013-01-01

    Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as "visual saliency." Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often have common easy/difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state of the art, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains.

  3. Modelling of structural complexity in sedimentary basins: The role of pre-existing faults in thrust tectonics

    NASA Astrophysics Data System (ADS)

    Sassi, W.; Colletta, B.; Balé, P.; Paquereau, T.

    1993-11-01

    Analogue and numerical models have been used to study the role of pre-existing faults in compressive regimes. From a theoretical point of view, reactivation is mainly controlled by fault attitude, stress regime and frictional properties of fault planes. In scaled-down sandbox experiments, precut faults are introduced in the homogeneous granular medium with a nylon wire that is forced through the sand cake, producing a thin planar disturbed zone. Systematic experiments of thrust inversion with various dips and strikes of such planar discontinuities have been modelled. Comparison of experimental results with theoretical diagrams indicates that disturbed zones have a friction angle which is 10-20% lower than the homogeneous sand and that the compressive regime in the sandbox has a shape factor close to 0.4. The static analysis of fault reactivation is in accordance with the experimental observations except for pre-existing faults dipping at very low angle. However, numerical modelling using the Udec code shows that low-angle faults can be reactivated as a result of stress concentration in the lower part of the fault. In addition, sandbox experiments indicate that in thrust systems, reactivation of pre-existing faults is not only dependent on their attitude but also on their spacing and location relative to the thrust system.
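
    The "theoretical diagrams" referred to above come from a static resolved-stress analysis: resolve the remote stress tensor onto a plane of given orientation and compare the shear-to-normal stress ratio with the friction available on that plane. A generic sketch follows; the thrust-regime principal stresses and friction value are chosen for illustration and are not the experimental values.

```python
import numpy as np

def plane_normal(strike_deg, dip_deg):
    """Upward unit normal of a plane from strike and dip (right-hand rule; x=East, y=North, z=Up)."""
    s, d = np.radians(strike_deg), np.radians(dip_deg)
    return np.array([np.cos(s) * np.sin(d), -np.sin(s) * np.sin(d), np.cos(d)])

def slip_tendency(stress, strike_deg, dip_deg):
    """Ratio of resolved shear to normal stress on the plane (compression positive)."""
    n = plane_normal(strike_deg, dip_deg)
    traction = stress @ n
    sigma_n = traction @ n
    tau = np.linalg.norm(traction - sigma_n * n)
    return tau / sigma_n

# Thrust-regime principal stresses (MPa): sigma1 horizontal (N-S), sigma3 vertical. Illustrative values.
stress = np.diag([30.0, 60.0, 20.0])          # [E-W, N-S, vertical]
mu_fault = 0.5                                # assumed friction on a pre-existing weak plane

for dip in (20, 35, 50, 65, 80):
    ts = slip_tendency(stress, strike_deg=90.0, dip_deg=dip)   # E-W striking planes
    print(f"dip {dip:2d} deg: tau/sigma_n = {ts:.2f} ->",
          "reactivates" if ts >= mu_fault else "stable")
```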

  4. Quantitative structure-interplanar spacing models based on montmorillonite modified with quaternary alkylammonium salts

    NASA Astrophysics Data System (ADS)

    Grigorev, V. Yu.; Grigoreva, L. D.; Salimov, I. E.

    2017-08-01

    Models of the quantitative structure-property relationship (QSPR) between the structure of 19 alkylammonium cations and the basal distances (d001) of Na+ montmorillonite modified with these cations are created. Seven descriptors characterizing intermolecular interaction, including new fractal descriptors, are used to describe the structure of the compounds. It is shown that equations obtained via multiple linear regression have good statistical characteristics, and the calculated d001 values agree with the results from experimental studies. The quantitative contribution from hydrogen bonds to the formation of interplanar spacing in Na+ montmorillonite is found by analyzing the QSPR models.

  5. PHYSIOLOGICALLY-BASED PHARMACOKINETIC ( PBPK ) MODEL FOR METHYL TERTIARY BUTYL ETHER ( MTBE ): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  6. PHYSIOLOGICALLY-BASED PHARMACOKINETIC ( PBPK ) MODEL FOR METHYL TERTIARY BUTYL ETHER ( MTBE ): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  7. The Power of a Good Idea: Quantitative Modeling of the Spread of Ideas from Epidemiological Models

    SciTech Connect

    Bettencourt, L. M. A.; Cintron-Arias, A.; Kaiser, D. I.; Castillo-Chavez, C.

    2005-05-05

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.
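
    The paradigmatic models referred to above are standard compartmental epidemic models reinterpreted for idea adoption. A minimal SIR-type sketch is given below; the contact ("adoption") and recovery ("abandonment") rates are made-up values, not the parameters fitted to the Feynman-diagram data.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.05      # illustrative adoption and abandonment rates, per unit time

def sir(t, y):
    s, i, r = y              # unaware, active adopters ("infected"), former adopters ("recovered")
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0.0, 100.0), [0.99, 0.01, 0.0], dense_output=True)
t = np.linspace(0.0, 100.0, 6)
for ti, (s, i, r) in zip(t, sol.sol(t).T):
    print(f"t={ti:5.1f}: unaware={s:.2f}  adopters={i:.2f}  abandoned={r:.2f}")
```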

  8. Testing the influence of vertical, pre-existing joints on normal faulting using analogue and 3D discrete element models (DEM)

    NASA Astrophysics Data System (ADS)

    Kettermann, Michael; von Hagke, Christoph; Virgo, Simon; Urai, Janos L.

    2015-04-01

    Brittle rocks are often affected by different generations of fractures that influence each other. We study pre-existing vertical joints followed by a faulting event. Understanding the effect of these interactions on fracture/fault geometries as well as the development of dilatancy and the formation of cavities as potential fluid pathways is crucial for reservoir quality prediction and production. Our approach combines scaled analogue and numerical modeling. Using cohesive hemihydrate powder allows us to create open fractures prior to faulting. The physical models are reproduced using the ESyS-Particle discrete element modeling (DEM) software, and different parameters are investigated. Analogue models were carried out in a manually driven deformation box (30x28x20 cm) with a 60° dipping pre-defined basement fault and 4.5 cm of displacement. To produce open joints prior to faulting, sheets of paper were mounted in the box to a depth of 5 cm at a spacing of 2.5 cm. Powder was then sieved into the box, embedding the paper almost entirely (column height of 19 cm), and the paper was removed. We tested the influence of different angles between the strike of the basement fault and the joint set (0°, 4°, 8°, 12°, 16°, 20°, and 25°). During deformation we captured structural information by time-lapse photography that allows particle image velocimetry (PIV) analyses to detect localized deformation at every increment of displacement. Post-mortem photogrammetry preserves the final 3-dimensional structure of the fault zone. We observe that no faults or fractures occur parallel to basement-fault strike. Secondary fractures are mostly oriented normal to primary joints. At the final stage of the experiments we analyzed semi-quantitatively the number of connected joints, number of secondary fractures, degree of segmentation (i.e. number of joints accommodating strain), damage zone width, and the map-view area fraction of open gaps. Whereas the area fraction does not change

  9. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    NASA Technical Reports Server (NTRS)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurements and computational modeling neighborhoods of the Earth-Moon community to ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter, based on Cl isotope measurements of mare basalts.

  10. Quantitative petri net model of gene regulated metabolic networks in the cell.

    PubMed

    Chen, Ming; Hofestädt, Ralf

    2011-01-01

    A method to exploit hybrid Petri nets (HPN) for quantitatively modeling and simulating gene regulated metabolic networks is demonstrated. A global kinetic modeling strategy and a Petri net modeling algorithm are applied to describe bioprocess functioning and to perform model analysis. With the model, the interrelations between pathway analysis and metabolic control mechanisms are outlined. Diagrammatical results of the dynamics of metabolites are simulated and observed by implementing a HPN tool, Visual Object Net ++. An explanation of the observed behavior of the urea cycle is proposed to indicate possibilities for metabolic engineering and medical care. Finally, the perspective of Petri nets on modeling and simulation of metabolic networks is discussed.

  11. A quantitative model for the rate-limiting process of UGA alternative assignments to stop and selenocysteine codons

    PubMed Central

    Chuang, Kai-Neng; Yen, Hsueh-Chi S.

    2017-01-01

    Ambiguity in genetic codes exists in cases where certain stop codons are alternatively used to encode non-canonical amino acids. In selenoprotein transcripts, the UGA codon may either represent a translation termination signal or a selenocysteine (Sec) codon. Translating UGA to Sec requires selenium and specialized Sec incorporation machinery such as the interaction between the SECIS element and SBP2 protein, but how these factors quantitatively affect alternative assignments of UGA has not been fully investigated. We developed a model simulating the UGA decoding process. Our model is based on the following assumptions: (1) charged Sec-specific tRNAs (Sec-tRNASec) and release factors compete for a UGA site, (2) Sec-tRNASec abundance is limited by the concentrations of selenium and Sec-specific tRNA (tRNASec) precursors, and (3) all synthesis reactions follow first-order kinetics. We demonstrated that this model captured two prominent characteristics observed from experimental data. First, UGA to Sec decoding increases with elevated selenium availability, but saturates under high selenium supply. Second, the efficiency of Sec incorporation is reduced with increasing selenoprotein synthesis. We measured the expressions of four selenoprotein constructs and estimated their model parameters. Their inferred Sec incorporation efficiencies did not correlate well with their SECIS-SBP2 binding affinities, suggesting the existence of additional factors determining the hierarchy of selenoprotein synthesis under selenium deficiency. This model provides a framework to systematically study the interplay of factors affecting the dual definitions of a genetic codon. PMID:28178267
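
    The stated assumptions (competition between charged Sec-tRNA and release factors at the UGA site, selenium-limited tRNA charging, first-order kinetics) can be illustrated with a minimal sketch such as the one below. The functional forms and parameter values are placeholders rather than the fitted model of the paper, but they reproduce the two qualitative behaviours described: saturation with increasing selenium supply and reduced incorporation efficiency at high selenoprotein synthesis.

        def sec_readthrough(selenium, demand, k_sec=1.0, k_rel=1.0, rf=1.0,
                            trna_max=1.0, K_se=0.5):
            """Fraction of UGA codons decoded as Sec rather than stop.

            Charged Sec-tRNA supply saturates with selenium (Michaelis-like term)
            and is diluted by selenoprotein synthesis demand; Sec-tRNA and release
            factors then compete for the UGA site with rate constants k_sec, k_rel.
            All names and values are illustrative placeholders."""
            sec_trna = trna_max * selenium / (K_se + selenium) / (1.0 + demand)
            return k_sec * sec_trna / (k_sec * sec_trna + k_rel * rf)

        # Readthrough rises then saturates with selenium, and falls as demand grows.
        for se in (0.1, 0.5, 2.0, 10.0):
            print(se, round(sec_readthrough(se, demand=0.5), 3),
                  round(sec_readthrough(se, demand=5.0), 3))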

  12. Pre-existing tolerance shapes the outcome of mucosal allergen sensitization in a murine model of asthma.

    PubMed

    Chapman, Timothy J; Emo, Jason A; Knowlden, Sara A; Rezaee, Fariba; Georas, Steve N

    2013-10-15

    Recent published studies have highlighted the complexity of the immune response to allergens, and the various asthma phenotypes that arise as a result. Although the interplay of regulatory and effector immune cells responding to allergen would seem to dictate the nature of the asthmatic response, little is known regarding how tolerance versus reactivity to allergen occurs in the lung. The vast majority of mouse models study allergen encounter in naive animals, and therefore exclude the possibility that previous encounters with allergen may influence future sensitization. To address this, we studied sensitization to the model allergen OVA in mice in the context of pre-existing tolerance to OVA. Allergen sensitization by either systemic administration of OVA with aluminum hydroxide or mucosal administration of OVA with low-dose LPS was suppressed in tolerized animals. However, higher doses of LPS induced a mixed Th2 and Th17 response to OVA in both naive and tolerized mice. Of interest, tolerized mice had more pronounced Th17-type inflammation than did naive mice receiving the same sensitization, suggesting pre-existing tolerance altered the inflammatory phenotype. These data show that a pre-existing tolerogenic immune response to allergen can affect subsequent sensitization in the lung. These findings have potential significance for understanding late-onset disease in individuals with severe asthma.

  13. Pre-existing Tolerance Shapes the Outcome of Mucosal Allergen Sensitization in a Murine Model of Asthma

    PubMed Central

    Chapman, Timothy J; Emo, Jason A; Knowlden, Sara A; Rezaee, Fariba; Georas, Steve N

    2013-01-01

    Recent published studies have highlighted the complexity of the immune response to allergens, and the various asthma phenotypes that arise as a result. While the interplay of regulatory and effector immune cells responding to allergen would seem to dictate the nature of the asthmatic response, little is known as to how tolerance versus reactivity to allergen occurs in the lung. The vast majority of mouse models study allergen encounter in naïve animals, and therefore exclude the possibility that previous encounters with allergen may influence future sensitization. To address this, we studied sensitization to the model allergen OVA in mice in the context of pre-existing tolerance to OVA. Allergen sensitization by either systemic administration of OVA with aluminum hydroxide or mucosal administration of OVA with low-dose lipopolysaccharide (LPS) was suppressed in tolerized animals. However, higher doses of LPS induced a mixed Th2 and Th17 response to OVA in both naïve and tolerized mice. Interestingly, tolerized mice had more pronounced Th17 type inflammation than naïve mice receiving the same sensitization, suggesting pre-existing tolerance altered the inflammatory phenotype. These data show that a pre-existing tolerogenic immune response to allergen can impact subsequent sensitization in the lung. These findings have potential significance in understanding late-onset disease in severe asthmatics. PMID:24038084

  14. The role of pre-existing disturbances in the effect of marine reserves on coastal ecosystems: a modelling approach.

    PubMed

    Savina, Marie; Condie, Scott A; Fulton, Elizabeth A

    2013-01-01

    We have used an end-to-end ecosystem model to explore responses over 30 years to coastal no-take reserves covering up to 6% of the fifty thousand square kilometres of continental shelf and slope off the coast of New South Wales (Australia). The model is based on the Atlantis framework, which includes a deterministic, spatially resolved three-dimensional biophysical model that tracks nutrient flows through key biological groups, as well as extraction by a range of fisheries. The model results support previous empirical studies in finding clear benefits of reserves to top predators such as sharks and rays throughout the region, while also showing how many of their major prey groups (including commercial species) experienced significant declines. It was found that the net impact of marine reserves was dependent on the pre-existing levels of disturbance (i.e. fishing pressure), and to a lesser extent on the size of the marine reserves. The high fishing scenario resulted in a strongly perturbed system, where the introduction of marine reserves had clear and mostly direct effects on biomass and functional biodiversity. However, under the lower fishing pressure scenario, the introduction of marine reserves caused both direct positive effects, mainly on shark groups, and indirect negative effects through trophic cascades. Our study illustrates the need to carefully align the design and implementation of marine reserves with policy and management objectives. Trade-offs may exist not only between fisheries and conservation objectives, but also among conservation objectives.

  15. The Role of Pre-Existing Disturbances in the Effect of Marine Reserves on Coastal Ecosystems: A Modelling Approach

    PubMed Central

    Savina, Marie; Condie, Scott A.; Fulton, Elizabeth A.

    2013-01-01

    We have used an end-to-end ecosystem model to explore responses over 30 years to coastal no-take reserves covering up to 6% of the fifty thousand square kilometres of continental shelf and slope off the coast of New South Wales (Australia). The model is based on the Atlantis framework, which includes a deterministic, spatially resolved three-dimensional biophysical model that tracks nutrient flows through key biological groups, as well as extraction by a range of fisheries. The model results support previous empirical studies in finding clear benefits of reserves to top predators such as sharks and rays throughout the region, while also showing how many of their major prey groups (including commercial species) experienced significant declines. It was found that the net impact of marine reserves was dependent on the pre-existing levels of disturbance (i.e. fishing pressure), and to a lesser extent on the size of the marine reserves. The high fishing scenario resulted in a strongly perturbed system, where the introduction of marine reserves had clear and mostly direct effects on biomass and functional biodiversity. However, under the lower fishing pressure scenario, the introduction of marine reserves caused both direct positive effects, mainly on shark groups, and indirect negative effects through trophic cascades. Our study illustrates the need to carefully align the design and implementation of marine reserves with policy and management objectives. Trade-offs may exist not only between fisheries and conservation objectives, but also among conservation objectives. PMID:23593432

  16. The quantitative assessment of domino effects caused by overpressure. Part I. Probit models.

    PubMed

    Cozzani, Valerio; Salzano, Ernesto

    2004-03-19

    Accidents caused by the domino effect are among the most severe that have occurred in the chemical and process industries. However, a well-established and widely accepted methodology for the quantitative assessment of the contribution of domino accidents to industrial risk is still missing. Hence, available data on damage to process equipment caused by blast waves were reviewed in the framework of quantitative risk analysis, aiming at the quantitative assessment of domino effects caused by overpressure. Specific probit models were derived for several categories of process equipment and were compared with other literature approaches for predicting the probability of damage of equipment loaded by overpressure. The results show the importance of using equipment-specific models for the probability of damage and equipment-specific damage threshold values, rather than general equipment correlations, which may lead to errors of up to 500%.
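
    A probit damage model of this kind maps a dose variable (here, peak overpressure) through a linear function of its logarithm to a probit value, and then to a damage probability via the standard normal CDF. The sketch below shows the general form; the coefficients are illustrative placeholders, not the equipment-specific values derived in the paper.

        import numpy as np
        from scipy.stats import norm

        def damage_probability(overpressure_kpa, a=-12.0, b=2.0):
            """Probit model: Y = a + b*ln(P); P(damage) = Phi(Y - 5).
            a and b are placeholders; the paper derives equipment-specific values."""
            probit = a + b * np.log(overpressure_kpa)
            return norm.cdf(probit - 5.0)

        for p in (10, 30, 100, 300):  # peak static overpressure in kPa
            print(p, round(damage_probability(p), 3))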

  17. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2016-07-01

    coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.

  18. From Tls Point Clouds to 3d Models of Trees: a Comparison of Existing Algorithms for 3d Tree Reconstruction

    NASA Astrophysics Data System (ADS)

    Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others, PlantScan3D and SimpleTree, to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees, with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we have acquired dense point clouds of six different urban trees, with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and limitations of each reconstruction algorithm are highlighted. Nevertheless, very satisfactory results can be achieved for 3D reconstructions of tree topology as well as of tree volume.

  19. Cholera Modeling: Challenges to Quantitative Analysis and Predicting the Impact of Interventions

    PubMed Central

    Grad, Yonatan H.; Miller, Joel C.; Lipsitch, Marc

    2012-01-01

    Several mathematical models of epidemic cholera have recently been proposed in response to outbreaks in Zimbabwe and Haiti. These models aim to estimate the dynamics of cholera transmission and the impact of possible interventions, with a goal of providing guidance to policy-makers in deciding among alternative courses of action, including vaccination, provision of clean water, and antibiotics. Here we discuss concerns about model misspecification, parameter uncertainty, and spatial heterogeneity intrinsic to models for cholera. We argue for caution in interpreting quantitative predictions, particularly predictions of the effectiveness of interventions. We specify sensitivity analyses that would be necessary to improve confidence in model-based quantitative prediction, and suggest types of monitoring in future epidemic settings that would improve analysis and prediction. PMID:22659546

  20. Principles of microRNA Regulation Revealed Through Modeling microRNA Expression Quantitative Trait Loci.

    PubMed

    Budach, Stefan; Heinig, Matthias; Marsico, Annalisa

    2016-08-01

    Extensive work has been dedicated to studying the mechanisms of microRNA-mediated gene regulation. However, the transcriptional regulation of microRNAs themselves is far less well understood, due to difficulties determining the transcription start sites of transient primary transcripts. This challenge can be addressed using expression quantitative trait loci (eQTLs) whose regulatory effects represent a natural source of perturbation of cis-regulatory elements. Here we used previously published cis-microRNA-eQTL data for the human GM12878 cell line, promoter predictions, and other functional annotations to determine the relationship between functional elements and microRNA regulation. We built a logistic regression model that classifies microRNA/SNP pairs into eQTLs or non-eQTLs with 85% accuracy; shows microRNA-eQTL enrichment for microRNA precursors, promoters, enhancers, and transcription factor binding sites; and depletion for repressed chromatin. Interestingly, although there is a large overlap between microRNA eQTLs and messenger RNA eQTLs of host genes, 74% of these shared eQTLs affect microRNA and host expression independently. Considering microRNA-only eQTLs, we find a significant enrichment for intronic promoters, validating the existence of alternative promoters for intragenic microRNAs. Finally, in line with the GM12878 cell line derived from B cells, we find that genome-wide association (GWA) variants associated with blood-related traits are more likely to be microRNA eQTLs than random GWA and non-GWA variants, aiding the interpretation of GWA results. Copyright © 2016 by the Genetics Society of America.
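
    The classification step described here is standard logistic regression over annotation-derived features. The sketch below uses synthetic placeholder features and labels simply to make the workflow concrete; the actual study used microRNA-eQTL calls and genomic annotations for GM12878.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical feature matrix for microRNA/SNP pairs: binary overlaps with
        # annotations (precursor, promoter, enhancer, TF binding site, repressed
        # chromatin) plus a normalized SNP-to-microRNA distance.
        rng = np.random.default_rng(0)
        X = np.hstack([rng.integers(0, 2, size=(500, 5)).astype(float),
                       rng.uniform(0, 1, size=(500, 1))])
        y = rng.integers(0, 2, size=500)  # 1 = eQTL, 0 = non-eQTL (placeholder labels)

        clf = LogisticRegression(max_iter=1000)
        print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy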

  1. Principles of microRNA Regulation Revealed Through Modeling microRNA Expression Quantitative Trait Loci

    PubMed Central

    Budach, Stefan; Heinig, Matthias; Marsico, Annalisa

    2016-01-01

    Extensive work has been dedicated to studying the mechanisms of microRNA-mediated gene regulation. However, the transcriptional regulation of microRNAs themselves is far less well understood, due to difficulties determining the transcription start sites of transient primary transcripts. This challenge can be addressed using expression quantitative trait loci (eQTLs) whose regulatory effects represent a natural source of perturbation of cis-regulatory elements. Here we used previously published cis-microRNA-eQTL data for the human GM12878 cell line, promoter predictions, and other functional annotations to determine the relationship between functional elements and microRNA regulation. We built a logistic regression model that classifies microRNA/SNP pairs into eQTLs or non-eQTLs with 85% accuracy; shows microRNA-eQTL enrichment for microRNA precursors, promoters, enhancers, and transcription factor binding sites; and depletion for repressed chromatin. Interestingly, although there is a large overlap between microRNA eQTLs and messenger RNA eQTLs of host genes, 74% of these shared eQTLs affect microRNA and host expression independently. Considering microRNA-only eQTLs, we find a significant enrichment for intronic promoters, validating the existence of alternative promoters for intragenic microRNAs. Finally, in line with the GM12878 cell line derived from B cells, we find that genome-wide association (GWA) variants associated with blood-related traits are more likely to be microRNA eQTLs than random GWA and non-GWA variants, aiding the interpretation of GWA results. PMID:27260304

  2. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    NASA Astrophysics Data System (ADS)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece, and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since market prices are not available for the evaluation process. Moreover, there is a particular gap in quantifying the damages and the expenditures necessary for the implementation of mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories are dependent on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches of vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from responsible administrative bodies and analysed on an object level. The model used is based on a basin scale approach as well as data on elements at risk exposed

  3. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  4. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  5. Framework for a Quantitative Systemic Toxicity Model (FutureToxII)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  6. Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  7. Framework for a Quantitative Systemic Toxicity Model (FutureToxII)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  8. Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  9. Bayesian methods for quantitative trait loci mapping based on model selection: approximate analysis using the Bayesian information criterion.

    PubMed

    Ball, R D

    2001-11-01

    We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
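
    The core computation, scoring marker subsets with a BIC-style criterion and converting the scores into approximate posterior model probabilities, can be sketched as below. The exact BIC-delta form, prior weighting and multiple-imputation handling in the paper may differ; this toy example simply scales the k*ln(n) penalty by delta and reports the posterior probability that the model is non-empty.

        import itertools
        import numpy as np

        def bic_delta(y, X, delta=2.0):
            """n*ln(RSS/n) + delta*k*ln(n) for an OLS fit of y on the columns of X."""
            n = len(y)
            X1 = np.column_stack([np.ones(n), X]) if X.shape[1] else np.ones((n, 1))
            beta = np.linalg.lstsq(X1, y, rcond=None)[0]
            rss = float(np.sum((y - X1 @ beta) ** 2))
            return n * np.log(rss / n) + delta * X1.shape[1] * np.log(n)

        rng = np.random.default_rng(1)
        n, m = 200, 5
        markers = rng.integers(0, 3, size=(n, m)).astype(float)  # genotype codes 0/1/2
        trait = 0.8 * markers[:, 2] + rng.normal(size=n)          # QTL linked to marker 2

        scores = {s: bic_delta(trait, markers[:, list(s)])
                  for r in range(m + 1) for s in itertools.combinations(range(m), r)}

        # Posterior model probabilities approximately proportional to exp(-BIC/2)
        b = np.array(list(scores.values()))
        post = np.exp(-(b - b.min()) / 2.0)
        post /= post.sum()
        p_qtl = sum(p for s, p in zip(scores, post) if len(s) > 0)
        print("P(at least one linked marker):", round(p_qtl, 3))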

  10. Bias Reduction in Estimating Variance Components of Phytoplankton Existence at Na Thap River Based on Logistics Linear Mixed Models

    NASA Astrophysics Data System (ADS)

    Arisanti, R.; Notodiputro, K. A.; Sadik, K.; Lim, A.

    2017-03-01

    There are two approaches to estimating variance components, i.e. linearity and integral approaches. However, the estimates of variance components produced by both methods are known to be biased. Firth (1993) introduced a parameter estimation method for correcting the bias of maximum likelihood estimates. This method lies within the class of linear models, especially the Restricted Maximum Likelihood (REML) method, and the resulting estimator is known as the Firth estimator. In this paper we discuss the bias correction method applied to a logistic linear mixed model in analyzing the existence of Synedra phytoplankton along the Na Thap river in Thailand. The Firth-adjusted Maximum Likelihood Estimation (MLE) is similar to REML but shows the characteristics of a generalized linear mixed model. We evaluated the Firth adjustment method by means of simulations, and the results showed that the unadjusted MLE produced 95% confidence intervals that were narrower than those of the Firth method. However, the probability coverage of the interval for the unadjusted MLE was lower than 95%, whereas for the Firth method the probability coverage was approximately 95%. These results were also consistent with the variance estimation of Synedra phytoplankton existence. It was shown that the variance estimates of the Firth-adjusted MLE were lower than those of the unadjusted MLE.
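
    Firth's correction penalizes the likelihood with the Jeffreys prior, which for ordinary logistic regression adds 0.5*log det(X'WX) to the log-likelihood, where W = diag(p(1-p)). The sketch below implements only that simple fixed-effects case; the logistic linear mixed model evaluated in the paper is more involved, so treat this as an illustration of why the adjusted estimates remain finite even under complete separation.

        import numpy as np
        from scipy.optimize import minimize

        def firth_logistic(X, y):
            """Firth-penalized logistic regression: maximize
            loglik(beta) + 0.5 * log det(X'WX) with W = diag(p(1-p))."""
            X1 = np.column_stack([np.ones(len(y)), X])

            def neg_penalized_loglik(beta):
                eta = np.clip(X1 @ beta, -30, 30)
                p = np.clip(1.0 / (1.0 + np.exp(-eta)), 1e-9, 1 - 1e-9)
                loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
                fisher = X1.T @ (X1 * (p * (1 - p))[:, None])
                _, logdet = np.linalg.slogdet(fisher)
                return -(loglik + 0.5 * logdet)

            return minimize(neg_penalized_loglik, np.zeros(X1.shape[1]), method="BFGS").x

        # Synthetic example with complete separation: the unpenalized MLE diverges,
        # while the Firth-penalized estimate stays finite.
        rng = np.random.default_rng(2)
        x = rng.normal(size=50)
        y = (x > 0).astype(float)
        print(firth_logistic(x[:, None], y))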

  11. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    PubMed Central

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction: Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods: A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results: The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix, evaluating model components and methodology. Conclusion: When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes, test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (HCV and HIV at the same time) might prove important for decision makers. PMID:26689908

  12. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
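
    The sub-model idea, training separate PLS regressions on restricted composition ranges and blending their predictions, can be sketched as follows on synthetic spectra. The range split, blending weights and component counts here are simplistic placeholders rather than the ChemCam calibration itself.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        spectra = rng.normal(size=(300, 200))                     # synthetic "spectra"
        sio2 = np.clip(50 + 15 * spectra[:, 0] + rng.normal(scale=2, size=300), 0, 100)

        # Full-range model plus two sub-models trained on restricted composition ranges.
        full = PLSRegression(n_components=5).fit(spectra, sio2)
        low = PLSRegression(n_components=5).fit(spectra[sio2 < 50], sio2[sio2 < 50])
        high = PLSRegression(n_components=5).fit(spectra[sio2 >= 50], sio2[sio2 >= 50])

        def blended_prediction(x):
            """Predict with the full model, then blend low/high sub-model predictions
            weighted by where the full-model estimate falls in the composition range."""
            x = x.reshape(1, -1)
            ref = full.predict(x).item()
            w_high = min(max(ref / 100.0, 0.0), 1.0)   # simplistic blending weight
            return (1 - w_high) * low.predict(x).item() + w_high * high.predict(x).item()

        print(blended_prediction(spectra[0]))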

  13. Existence and asymptotics of traveling wave fronts for a delayed nonlocal diffusion model with a quiescent stage

    NASA Astrophysics Data System (ADS)

    Zhou, Kai; Lin, Yuan; Wang, Qi-Ru

    2013-11-01

    In this paper, we propose a delayed nonlocal diffusion model with a quiescent stage and study its dynamics. By using Schauder's fixed point theorem and the upper-lower solution method, we establish the existence of traveling wave fronts for speeds c⩾c∗(τ), where c∗(τ) is a critical value. With the method of Carr and Chmaj (PAMS, 2004), we discuss the asymptotic behavior of traveling wave fronts and then obtain the nonexistence of traveling wave fronts for c<c∗(τ).

  14. Existence, multiplicity and stability of endemic states for an age-structured S-I epidemic model.

    PubMed

    Breda, D; Visetti, D

    2012-01-01

    We study an S-I type epidemic model in an age-structured population, with mortality due to the disease. A threshold quantity is found that controls the stability of the disease-free equilibrium and guarantees the existence of an endemic equilibrium. We obtain conditions on the age-dependence of the susceptibility to infection that imply the uniqueness of the endemic equilibrium. An example with two endemic equilibria is shown. Finally, we analyse numerically how the stability of the endemic equilibrium is affected by the extra-mortality and by the possible periodicities induced by the demographic age-structure.
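
    A generic age-structured S-I system with disease-induced mortality, of the type analysed here, can be written as below; this is a sketch for orientation only and need not match the paper's exact formulation.

        \begin{align*}
        (\partial_t + \partial_a)\,S(a,t) &= -\mu(a)\,S(a,t) - \lambda(a,t)\,S(a,t),\\
        (\partial_t + \partial_a)\,I(a,t) &= \lambda(a,t)\,S(a,t) - \bigl(\mu(a) + \alpha(a)\bigr)\,I(a,t),\\
        \lambda(a,t) &= \kappa(a)\int_0^{a_\dagger}\beta(a')\,I(a',t)\,\mathrm{d}a',
        \end{align*}
        % where \mu is natural mortality, \alpha the extra mortality due to the disease, and the
        % threshold quantity (the spectral radius of the next-generation operator) controls the
        % stability of the disease-free equilibrium and the existence of endemic states.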

  15. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling.

    PubMed

    Dick, Daniel G; Maxwell, Erin E

    2015-07-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'.

  16. A Davis-Putnam program and its application to finite-order model search: Quasigroup existence problems

    SciTech Connect

    McCune, W.

    1994-09-01

    This document describes the implementation and use of a Davis-Putnam procedure for the propositional satisfiability problem. It also describes code that takes statements in first-order logic with equality and a domain size n and then searches for models of size n. The first-order model-searching code transforms the statements into a set of propositional clauses such that the first-order statements have a model of size n if and only if the propositional clauses are satisfiable. The propositional set is then given to the Davis-Putnam code; any propositional models that are found can be translated to models of the first-order statements. The first-order model-searching program accepts statements only in a flattened relational clause form without function symbols. Additional code was written to take input statements in the language of OTTER 3.0 and produce the flattened relational form. The program was successfully applied to several open questions on the existence of orthogonal quasigroups.
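
    The propositional core of a Davis-Putnam style procedure is unit propagation plus case splitting on an unassigned variable. A compact sketch is given below; the implementation described in the record (and its flattening of first-order statements into propositional clauses) is considerably more elaborate.

        def dpll(clauses, assignment=None):
            """Davis-Putnam-Logemann-Loveland satisfiability check.
            clauses: list of lists of nonzero ints (negative = negated variable).
            Returns a satisfying assignment (dict var -> bool) or None."""
            if assignment is None:
                assignment = {}
            changed = True
            while changed:                       # unit propagation
                changed = False
                for clause in clauses:
                    if any((l > 0) == assignment.get(abs(l))
                           for l in clause if abs(l) in assignment):
                        continue                 # clause already satisfied
                    unassigned = [l for l in clause if abs(l) not in assignment]
                    if not unassigned:
                        return None              # clause falsified
                    if len(unassigned) == 1:
                        assignment[abs(unassigned[0])] = unassigned[0] > 0
                        changed = True
            for clause in clauses:               # split on an unassigned variable
                for lit in clause:
                    if abs(lit) not in assignment:
                        for val in (True, False):
                            trial = dict(assignment)
                            trial[abs(lit)] = val
                            result = dpll(clauses, trial)
                            if result is not None:
                                return result
                        return None
            return assignment

        # Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
        print(dpll([[1, 2], [-1, 3], [-2, -3]]))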

  17. Models and methods for quantitative analysis of surface-enhanced Raman spectra.

    PubMed

    Li, Shuo; Nyagilo, James O; Dave, Digant P; Gao, Jean

    2014-03-01

    The quantitative analysis of surface-enhanced Raman spectra using scattering nanoparticles has shown potential and promising applications in in vivo molecular imaging. Diverse approaches have been used for quantitative analysis of Raman spectra, which can be categorized as direct classical least squares models, full-spectrum multivariate calibration models, selected multivariate calibration models, and latent variable regression (LVR) models. However, the working principle of these methods in the Raman spectra application remains poorly understood, and a clear picture of the overall performance of each model is missing. Based on the characteristics of the Raman spectra, in this paper we first provide the theoretical foundation of the aforementioned commonly used models and show why the LVR models are more suitable for quantitative analysis of Raman spectra. Then, we demonstrate the fundamental connections and differences between different LVR methods, such as principal component regression, reduced-rank regression, partial least squares regression (PLSR), canonical correlation regression, and robust canonical analysis, by comparing their objective functions and constraints. We further show that PLSR is effectively a blend of multivariate calibration and feature extraction that relates concentrations of nanotags to spectrum intensity. These features (a.k.a. latent variables) serve two purposes: the best representation of the predictor matrix and correlation with the response matrix. These illustrations give a new understanding of traditional PLSR and explain why PLSR outperforms other methods in quantitative analysis of the Raman spectra problem. Finally, all the methods are tested on Raman spectra datasets with different evaluation criteria to evaluate their performance.

  18. Human judgment vs. quantitative models for the management of ecological resources.

    PubMed

    Holden, Matthew H; Ellner, Stephen P

    2016-07-01

    Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed

  19. Quantitative Genetics and Functional–Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    PubMed Central

    Letort, Véronique; Mahe, Paul; Cournède, Paul-Henry; de Reffye, Philippe; Courtois, Brigitte

    2008-01-01

    Background and Aims: Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype × environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional–structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods: The GREENLAB model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the species-specific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. Key Results and Conclusions: By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared in the case of phenotypic traits – such as cob weight – and when traits were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters (and the corresponding genotype) associated with yield optimization of a GREENLAB maize model. The paper discusses the potentials of GREENLAB to represent environment × genotype

  20. Modelling CEC variations versus structural iron reduction levels in dioctahedral smectites. Existing approaches, new data and model refinements.

    PubMed

    Hadi, Jebril; Tournassat, Christophe; Ignatiadis, Ioannis; Greneche, Jean Marc; Charlet, Laurent

    2013-10-01

    A model was developed to describe how the 2:1 layer excess negative charge induced by the reduction of Fe(III) to Fe(II) by sodium dithionite buffered with citrate-bicarbonate is balanced, and the model was applied to nontronites. This model is based on new experimental data and extends the structural interpretation introduced by a former model [36-38]. The 2:1 layer negative charge increase due to Fe(III) to Fe(II) reduction is balanced by an excess adsorption of cations in the clay interlayers and a specific sorption of H(+) from solution. Prevalence of one compensating mechanism over the other is related to the growing lattice distortion induced by structural Fe(III) reduction. At low reduction levels, cation adsorption dominates and some of the incorporated protons react with structural OH groups, leading to a dehydroxylation of the structure. Starting from a moderate reduction level, other structural changes occur, leading to a reorganisation of the octahedral and tetrahedral lattice: migration or release of cations, intense dehydroxylation and bonding of protons to undersaturated oxygen atoms. Experimental data highlight some particular properties of ferruginous smectites regarding chemical reduction. Contrary to previous assumptions, the negative layer charge of nontronites does not only increase towards a plateau value upon reduction. A peak is observed in the reduction domain. After this peak, the negative layer charge decreases upon extended reduction (>30%). The decrease is so dramatic that the layer charge of highly reduced nontronites can fall below that of their fully oxidised counterparts. Furthermore, the presence of a large amount of tetrahedral Fe seems to promote intense clay structural changes and Fe reducibility. Our newly acquired data clearly show that models currently available in the literature cannot be applied to the whole reduction range of clay structural Fe. Moreover, changes in the model normalising procedure clearly demonstrate that the investigated low

  1. Evaluation of the existing triple point path models with new experimental data: proposal of an original empirical formulation

    NASA Astrophysics Data System (ADS)

    Boutillier, J.; Ehrhardt, L.; De Mezzo, S.; Deck, C.; Magnan, P.; Naz, P.; Willinger, R.

    2017-08-01

    With the increasing use of improvised explosive devices (IEDs), the need for better mitigation, whether for building integrity or for personal security, is growing in importance. Before focusing on the interaction of the shock wave with a target and the potential associated damage, knowledge must be acquired regarding the nature of the blast threat, i.e., the pressure-time history. This requirement motivates gaining further insight into the triple point (TP) path, in order to know precisely which regime the target will encounter (simple reflection or Mach reflection). Within this context, the purpose of this study is to evaluate three existing TP path empirical models, which in turn are used in other empirical models for the determination of the pressure profile. These three TP models are the empirical function of Kinney, the Unified Facilities Criteria (UFC) curves, and the model of the Natural Resources Defense Council (NRDC). As discrepancies are observed between these models, new experimental data were obtained to test their reliability, and a promising new formulation is proposed for scaled heights of burst ranging from 24.6 to 172.9 cm/kg^{1/3}.

  2. Can we better use existing and emerging computing hardware to embed activity coefficient predictions in complex atmospheric aerosol models?

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Ruske, Simon; Hindriksen, Vincent; Noisternig, Michael

    2016-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, even at the single aerosol particle level it has been impossible to embed fully coupled representations of process-level knowledge for all possible compounds, so the true sensitivity to uncertainties in molecular properties cannot be quantified; models typically rely on heavily parameterised descriptions. Relying on emerging numerical frameworks, and designed for the changing landscape of high-performance computing (HPC), in this study we show that comprehensive microphysical models from single particle to larger scales can be developed to encompass a complete state-of-the-art knowledge of aerosol chemical and process diversity. We focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method, profiling traditional coding strategies and those that exploit emerging hardware.

  3. A Computational Study of Cavitation Model Validity Using a New Quantitative Criterion

    NASA Astrophysics Data System (ADS)

    Hagar Alm, El-Din; Zhang, Yu-Sheng; Medhat, Elkelawy

    2012-06-01

    The predictive capability of two different numerical cavitation models accounting for the onset and development of cavitation inside real-sized diesel nozzle holes is assessed on the basis of the referenced experimental data. The calculations performed indicate that for the same model assumptions, numerical implementation, discretization scheme, and turbulence grid resolution model, the predictions for differently applied physical cavitation submodels are phenomenologically distinct from each other. We apply a new criterion for the quantitative comparison of the results obtained from the two cavitation models.

  4. Quantitative retinal blood flow mapping from fluorescein videoangiography using tracer kinetic modeling.

    PubMed

    Tichauer, Kenneth M; Guthrie, Micah; Hones, Logan; Sinha, Lagnojita; St Lawrence, Keith; Kang-Mieler, Jennifer J

    2015-05-15

    Abnormal retinal blood flow (RBF) has been associated with numerous retinal pathologies, yet existing methods for measuring RBF predominantly provide only relative measures of blood flow and are unable to quantify volumetric blood flow, which could allow direct patient-to-patient comparison. This work presents a methodology based on linear systems theory and an image-based arterial input function to quantitatively map volumetric blood flow from standard fluorescein videoangiography data, and is therefore directly translatable to the clinic. Application of the approach to fluorescein retinal videoangiography in rats (4 control, 4 diabetic) demonstrated significantly higher RBF in 4-5 week diabetic rats, as expected from the literature.
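
    Linear-systems flow estimation of this type treats the tissue dye curve as the convolution of the arterial input function (AIF) with a flow-scaled residue function and recovers the scale by deconvolution. The sketch below performs a regularized least-squares deconvolution on synthetic curves; it illustrates the general tracer-kinetic idea, not necessarily the exact algorithm of the paper.

        import numpy as np

        def flow_from_deconvolution(tissue, aif, dt, lam=0.1):
            """Estimate flow as the peak of the impulse response F*R(t) recovered by
            Tikhonov-regularized deconvolution of tissue(t) = F * (AIF conv R)(t)."""
            n = len(aif)
            A = np.zeros((n, n))                 # lower-triangular convolution matrix
            for i in range(n):
                A[i, : i + 1] = aif[i::-1]
            A *= dt
            h = np.linalg.solve(A.T @ A + (lam ** 2) * np.eye(n), A.T @ tissue)
            return h.max()                       # flow per unit tissue volume (R(0) = 1)

        # Synthetic test: gamma-variate AIF, exponential residue function, known flow.
        dt = 0.1
        t = np.arange(0, 30, dt)
        aif = t ** 3 * np.exp(-t / 1.5)
        true_flow, mtt = 0.6, 4.0
        tissue = true_flow * dt * np.convolve(aif, np.exp(-t / mtt))[: len(t)]
        print(flow_from_deconvolution(tissue, aif, dt))   # should be close to 0.6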

  5. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling

    PubMed Central

    Dick, Daniel G.; Maxwell, Erin E.

    2015-01-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the ‘migration model’. PMID:26156130

  6. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    SciTech Connect

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “submodel” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  7. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “submodel” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  8. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  9. Integrating precipitation datasets from global climate models into integrated groundwater-surface water models: A pilot study using existing data and open source models

    NASA Astrophysics Data System (ADS)

    Christian-Smith, J.; Singh, A.; Suribhatla, R. M.

    2016-12-01

    Climate change is one of the factors influencing water supply, quality, and availability, yet climate science is often not incorporated into the models commonly used for groundwater management. This disconnect between climate science and groundwater planning threatens the long-term sustainability of California's water supply as climate change affects our hydrological system. In addition, the recently approved Groundwater Sustainability Plan regulations require that water budgets utilize projections to account for changes in population, climate change and sea level rise (Section 354.18 (d)(3)). This pilot study combines existing data from two open source models in order to simulate projected climate change impacts on groundwater resources in California's Central Valley. The study combines downscaled precipitation data from the U.S. Geological Survey's Basin Characterization Model (BCM) with the California Department of Water Resources' California Central Valley Groundwater-Surface Water Simulation Model (C2VSIM), which simulates water movement through the linked land surface, groundwater and surface water flow systems. Current, publicly available versions of C2VSIM are based on historical data and run through September 2009. This study extends this analysis through the end of the century, using the downscaled climate information from the BCM. The study concludes with a series of recommendations for how to improve the way that C2VSIM integrates climate data and a series of lessons learned for groundwater managers about the potential impacts of climate change on groundwater availability.

  10. Evolution of a fold-thrust belt deforming a unit with pre-existing linear asperities: Insights from analog models

    NASA Astrophysics Data System (ADS)

    Burberry, Caroline M.; Swiatlowski, Jerlyn L.

    2016-06-01

    Heterogeneity, whether geometric or rheologic, in crustal material undergoing compression affects the geometry of the structures produced. This study documents the thrust fault geometries produced when discrete linear asperities are introduced into an analog model, scaled to represent bulk upper crustal properties, and compressed. Varying obliquities of the asperities are used, relative to the imposed compression, and the resultant development of thrust fault traces and branch lines in map view is tracked. Once the model runs are completed, cross-sections are created and analyzed. The models show that asperities confined to the base layer promote the clustering of branch lines in the surface thrusts. Strong clustering in branch lines is also noted where several asperities are in close proximity or cross. Slight reverse-sense reactivation of asperities that cut through the sedimentary sequence is noted in cross-section, where the asperity and the subsequent thrust belt interact. The model results are comparable to the situation in the Dinaric Alps, where pre-existing faults to the SW of the NE Adriatic Fault Zone contribute to the clustering of branch lines developed in the surface fold-thrust belt. These results can therefore be used to evaluate the evolution of other basement-involved fold-thrust belts worldwide.

  11. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.
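
    As an example of the kind of spatial statistic such an analysis might use, the sketch below computes a nearest-neighbour-distance summary for a 3D point pattern; the neuron coordinates here are random placeholders, and the actual statistics used in the study are not reproduced.

```python
# Minimal sketch: nearest-neighbour distances of a 3D point pattern
# (placeholder coordinates standing in for segmented motor-neuron positions).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
neurons_um = rng.uniform(0.0, 500.0, size=(400, 3))  # hypothetical positions in micrometres

tree = cKDTree(neurons_um)
dist, _ = tree.query(neurons_um, k=2)   # k=2: the first neighbour is the point itself
nnd = dist[:, 1]
print(f"mean nearest-neighbour distance = {nnd.mean():.1f} um, "
      f"coefficient of variation = {nnd.std() / nnd.mean():.2f}")
```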

  12. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    PubMed Central

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies. PMID:28112212

  13. Quantitative 3D investigation of Neuronal network in mouse spinal cord model.

    PubMed

    Bukreeva, I; Campi, G; Fratini, M; Spanò, R; Bucci, D; Battaglia, G; Giove, F; Bravin, A; Uccelli, A; Venturi, C; Mastrogiacomo, M; Cedola, A

    2017-01-23

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mice spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a "database" for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  14. Mathematical model of the Tat-Rev regulation of HIV-1 replication in an activated cell predicts the existence of oscillatory dynamics in the synthesis of viral components

    PubMed Central

    2014-01-01

    […] analyzed alternative hypotheses for the re-cycling of the Rev proteins both in the cytoplasm and the nuclear pore complex. Conclusions: The quantitative mathematical model of the Tat-Rev regulation of HIV-1 replication predicts the existence of oscillatory dynamics which depends on the efficacy of the Tat and TAR interaction as well as on the Rev-mediated transport processes. The biological relevance of the oscillatory regimes for the HIV-1 life cycle is discussed. PMID:25564443

  15. Experimental model considerations for the study of protein-energy malnutrition co-existing with ischemic brain injury.

    PubMed

    Prosser-Loose, Erin J; Smith, Shari E; Paterson, Phyllis G

    2011-05-01

    Protein-energy malnutrition (PEM) affects ~16% of patients at admission for stroke. We previously modeled this in a gerbil global cerebral ischemia model and found that PEM impairs functional outcome and influences mechanisms of ischemic brain injury and recovery. Since this model is no longer reliable, we investigated the utility of the rat 2-vessel occlusion (2-VO) with hypotension model of global ischemia for further study of this clinical problem. Male, Sprague-Dawley rats were exposed to either control diet (18% protein) or PEM induced by feeding a low protein diet (2% protein) for 7d prior to either global ischemia or sham surgery. PEM did not significantly alter the hippocampal CA1 neuron death (p = 0.195 by 2-factor ANOVA) or the increase in dendritic injury caused by exposure to global ischemia. Unexpectedly, however, a strong trend was evident for PEM to decrease the consistency of hippocampal damage, as shown by an increased incidence of unilateral or no hippocampal damage (p=0.069 by chi-square analysis). Although PEM caused significant changes to baseline arterial blood pH, pO(2), pCO(2), and fasting glucose (p<0.05), none of these variables (nor hematocrit) correlated significantly with CA1 cell counts in the malnourished group exposed to 2-VO (p>0.269). Intra-ischemic tympanic temperature and blood pressure were strictly and equally controlled between ischemic groups. We conclude that co-existing PEM confounded the consistency of hippocampal injury in the 2-VO model. Although the mechanisms responsible were not identified, this model of brain ischemia should not be used for studying this co-morbidity factor. © 2011 Bentham Science Publishers Ltd.

  16. Quantitative evaluation by measurement and modeling of the variations in dose distributions deposited in mobile targets.

    PubMed

    Ali, Imad; Alsbou, Nesreen; Taguenang, Jean-Michel; Ahmad, Salahuddin

    2017-03-03

    The objective of this study is to quantitatively evaluate, by measurement and modeling, the variations of dose distributions deposited in a mobile target. The effects of motion-induced variation in dose distribution on tumor dose coverage and sparing of normal tissues were investigated quantitatively. The dose distributions with motion artifacts were modeled considering different motion patterns that include (a) motion with constant speed and (b) sinusoidal motion. The model predictions of the dose distributions with motion artifacts were verified against measurements in which the dose distributions from various plans, including three-dimensional conformal and intensity-modulated fields, were measured with a multiple-diode-array detector (MapCHECK2) mounted on a mobile platform that moves with adjustable motion parameters. For each plan, the dose distributions were measured with MapCHECK2 using different motion amplitudes from 0 to 25 mm. In addition, mathematical modeling was developed to predict the variations in the dose distributions and their dependence on the motion parameters, which included amplitude, frequency, and phase for sinusoidal motions. The dose distributions varied with motion and depended on the motion pattern, particularly for sinusoidal motion, which spread the dose out along the direction of motion. Study results showed that in the dose region between the isocenter and the 50% isodose line, the dose profile decreased with increasing motion amplitude. As the range of motion became larger than the field length along the direction of motion, the dose profiles changed overall, including the central-axis dose and the 50% isodose line. If the total dose was delivered over a time much longer than the periodic time of the motion, variations in motion frequency and phase did not affect the dose profiles. As a result, the motion dose modeling developed in this study provided quantitative characterization of the variation in the dose distributions induced by motion, which […]
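
    A minimal sketch of the modeling idea, under the assumption that the blurred dose can be obtained by time-averaging a static 1D profile over the motion trajectory (valid when the delivery time is much longer than the motion period): the field shape, lengths, and amplitudes below are illustrative, not the measured MapCHECK2 data.

```python
# Minimal sketch: motion blurring of a static 1D dose profile by time-averaging
# over a sinusoidal trajectory (all shapes and numbers are illustrative).
import numpy as np

x = np.linspace(-60.0, 60.0, 1201)   # position along the motion direction, mm
half_field = 25.0                    # illustrative field half-length, mm
static = 0.5 * (np.tanh((x + half_field) / 2.0) - np.tanh((x - half_field) / 2.0))

def blur_sinusoidal(profile, x, amplitude_mm, n_phase=720):
    """Average the static profile over one full motion cycle x(t) = A*sin(phase);
    valid when the delivery time is much longer than the motion period."""
    blurred = np.zeros_like(profile)
    for phase in np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False):
        shift = amplitude_mm * np.sin(phase)
        blurred += np.interp(x + shift, x, profile)   # dose seen by the displaced target
    return blurred / n_phase

for amplitude in (0.0, 5.0, 15.0, 25.0):   # amplitudes spanning the 0-25 mm range
    blurred = blur_sinusoidal(static, x, amplitude)
    print(f"A = {amplitude:4.1f} mm -> central-axis dose = {blurred[x.size // 2]:.3f}")
```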

  17. Comparison of quantitative structure-activity relationship model performances on carboquinone derivatives.

    PubMed

    Bolboacă, Sorana-Daniela; Jäntschi, Lorentz

    2009-10-14

    Quantitative structure-activity relationship (qSAR) models are used to understand how the structure and activity of chemical compounds relate. In the present study, 37 carboquinone derivatives were evaluated and two different qSAR models were developed using members of the Molecular Descriptors Family (MDF) and the Molecular Descriptors Family on Vertices (MDFV). The usual parameters of regression models and the following estimators were defined and calculated in order to analyze the validity and to compare the models: Akaike's information criteria (three parameters), Schwarz (or Bayesian) information criterion, Amemiya prediction criterion, Hannan-Quinn criterion, Kubinyi function, Steiger's Z test, and Akaike's weights. The MDF and MDFV models proved to have the same estimation ability of the goodness-of-fit according to Steiger's Z test. The MDFV model proved to be the best model for the considered carboquinone derivatives according to the defined information and prediction criteria, Kubinyi function, and Akaike's weights.
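
    For readers unfamiliar with these estimators, the sketch below computes Gaussian-likelihood AIC and BIC values and Akaike weights for two competing regression fits; the data and parameter counts are placeholders, and only the generic formulas, not the paper's MDF/MDFV models, are shown.

```python
# Minimal sketch: information criteria and Akaike weights for two regression fits
# (random placeholder data; k counts the fitted parameters of each model).
import numpy as np

def aic_bic(y, y_hat, k):
    """Gaussian-likelihood AIC and BIC for a least-squares fit with k parameters."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    log_l = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)   # maximised log-likelihood
    return 2.0 * k - 2.0 * log_l, k * np.log(n) - 2.0 * log_l

def akaike_weights(aic_values):
    delta = np.asarray(aic_values) - np.min(aic_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

rng = np.random.default_rng(2)
y = rng.normal(size=37)                          # 37 compounds, as in the study
pred_a = y + rng.normal(scale=0.30, size=37)     # predictions of a hypothetical "model A"
pred_b = y + rng.normal(scale=0.25, size=37)     # predictions of a hypothetical "model B"
aic_a, bic_a = aic_bic(y, pred_a, k=4)
aic_b, bic_b = aic_bic(y, pred_b, k=5)
print(f"model A: AIC={aic_a:.1f}, BIC={bic_a:.1f}; model B: AIC={aic_b:.1f}, BIC={bic_b:.1f}")
print("Akaike weights:", akaike_weights([aic_a, aic_b]).round(3))
```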

  18. [Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].

    PubMed

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on quantitative analysis by near-infrared spectroscopy was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with the temperature included as an external variable. The constant-temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant-temperature (22 degrees C) model increased as the temperature difference enlarged, reaching 0.602 when the model was used at 4 degrees C. This suggests that the temperature correction model improves the analysis precision.
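
    One plausible reading of "temperature as an external variable" is simply to append the sample temperature to each spectrum as an extra predictor before regression. The sketch below contrasts that augmented model with a spectra-only model using simulated data; it is an assumption-laden illustration, not the calibration used in the study.

```python
# Minimal sketch: PLS calibration with sample temperature appended as an extra
# predictor ("external variable"); all spectra and reference values are simulated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 45, 120                    # 45 wheat samples, placeholder grid
protein = rng.uniform(9.0, 16.0, n_samples)           # reference protein content, %
temperature = rng.uniform(4.0, 30.0, n_samples)       # sample temperature, degrees C

signature = rng.normal(size=n_wavelengths)            # toy protein absorption signature
spectra = (protein[:, None] * signature[None, :]
           + 0.05 * temperature[:, None]              # temperature-dependent baseline shift
           + 0.10 * rng.normal(size=(n_samples, n_wavelengths)))

plain = PLSRegression(n_components=5).fit(spectra, protein)          # spectra only
augmented = np.hstack([spectra, temperature[:, None]])               # spectra + temperature
corrected = PLSRegression(n_components=5).fit(augmented, protein)

print("R^2, spectra only          :", round(plain.score(spectra, protein), 3))
print("R^2, temperature-corrected :", round(corrected.score(augmented, protein), 3))
```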

  19. Quantitative Structure‐activity Relationship (QSAR) Models for Docking Score Correction

    PubMed Central

    Yamasaki, Satoshi; Yasumatsu, Isao; Takeuchi, Koh; Kurosawa, Takashi; Nakamura, Haruki

    2016-01-01

    In order to improve docking score correction, we developed several structure-based quantitative structure-activity relationship (QSAR) models by protein-drug docking simulations and applied these models to public affinity data. The prediction models used descriptor-based regression, and the compound descriptor was a set of docking scores against multiple (∼600) proteins including nontargets. The binding free energy that corresponded to the docking score was approximated by a weighted average of docking scores for multiple proteins, and we tried linear, weighted linear, and polynomial regression models considering the compound similarities. In addition, we tried a combination of these regression models for individual data sets such as IC50, Ki, and %inhibition values. The cross-validation results showed that the weighted linear model was more accurate than the simple linear regression model. Thus, QSAR approaches based on the affinity data of public databases should improve docking scores. PMID:28001004
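
    The sketch below illustrates the general idea, under the assumption that the descriptor vector is the compound's docking scores against many proteins and that sample weights come from compound similarity; the data, the similarity kernel, and the ridge regressor are placeholders rather than the paper's actual models.

```python
# Minimal sketch: similarity-weighted linear regression on docking-score descriptors
# (simulated scores and affinities; the weighting kernel is an assumption).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_compounds, n_proteins = 200, 600
scores = rng.normal(size=(n_compounds, n_proteins))   # docking scores against ~600 proteins
p_affinity = scores[:, :5].mean(axis=1) + 0.3 * rng.normal(size=n_compounds)  # toy affinities

query = scores[0]                                      # compound whose score we want to correct
similarity = np.exp(-np.linalg.norm(scores[1:] - query, axis=1) / np.sqrt(n_proteins))
model = Ridge(alpha=1.0).fit(scores[1:], p_affinity[1:], sample_weight=similarity)
print("corrected affinity estimate:", round(float(model.predict(query[None, :])[0]), 3))
```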

  20. Experimentally validated quantitative linear model for the device physics of elastomeric microfluidic valves

    NASA Astrophysics Data System (ADS)

    Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French

    2007-03-01

    A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295 μm lateral dimensions, 16-39 μm membrane thickness, and 1-28 psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150 μm width) are shown to behave like thin springs.

  1. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    NASA Astrophysics Data System (ADS)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
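
    For reference, a minimal simulation of the OV model on a circular road is sketched below. The optimal-velocity function and all parameter values are illustrative choices, not the calibrated values or scaling rule from the paper; the point is only to show how a small perturbation of uniform flow can grow into a jam.

```python
# Minimal sketch: optimal velocity (OV) model on a circular road with an
# illustrative OV function and parameters (not the paper's calibrated values).
import numpy as np

def ov_function(headway, v_max=9.0, c=7.0, w=2.5):
    """One common OV functional form: V(h) = v_max/2 * [tanh((h-c)/w) + tanh(c/w)]."""
    return 0.5 * v_max * (np.tanh((headway - c) / w) + np.tanh(c / w))

def simulate(n_cars=30, road_length=230.0, a=1.0, t_end=300.0, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, road_length, n_cars, endpoint=False)
    x += 0.2 * rng.normal(size=n_cars)                       # small perturbation of uniform flow
    v = ov_function(np.full(n_cars, road_length / n_cars))   # start at the optimal velocity
    for _ in range(int(t_end / dt)):
        headway = (np.roll(x, -1) - x) % road_length         # distance to the car ahead
        v = np.maximum(v + a * (ov_function(headway) - v) * dt, 0.0)
        x = (x + v * dt) % road_length
    return v

v = simulate()
print("velocity spread after 300 s:", round(float(v.max() - v.min()), 2), "m/s")  # large spread => jam
```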

  2. Quantitative DFT modeling of the enantiomeric excess for dioxirane-catalyzed epoxidations

    PubMed Central

    Schneebeli, Severin T.; Hall, Michelle Lynn

    2009-01-01

    Herein we report the first fully quantum mechanical study of enantioselectivity for a large dataset. We show that transition state modeling at the UB3LYP-DFT/6-31G* level of theory can accurately model enantioselectivity for various dioxirane-catalyzed asymmetric epoxidations. All the synthetically useful high selectivities are successfully “predicted” by this method. Our results hint at the utility of this method to further model other asymmetric reactions and facilitate the discovery process for the experimental organic chemist. Our work suggests the possibility of using computational methods not simply to explain organic phenomena, but also to predict them quantitatively. PMID:19243187
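
    The link between a computed activation free-energy difference and the predicted enantiomeric excess is the standard transition-state-theory relation; the sketch below evaluates it for a few placeholder DDG-double-dagger values (the DFT energies themselves are not reproduced here).

```python
# Minimal sketch: enantiomeric excess from a computed activation free-energy
# difference via transition-state theory (the DDG values below are placeholders).
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)

def predicted_ee(ddg_kj_mol, temperature_k=298.15):
    """ee from DDG-double-dagger, assuming the product ratio equals the rate ratio."""
    ratio = math.exp(ddg_kj_mol / (R * temperature_k))   # k_major / k_minor
    return (ratio - 1.0) / (ratio + 1.0)

for ddg in (2.0, 5.0, 10.0):   # kJ/mol, illustrative
    print(f"DDG = {ddg:4.1f} kJ/mol -> predicted ee = {100.0 * predicted_ee(ddg):5.1f}%")
```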

  3. Modelling Activities In Kinematics Understanding quantitative relations with the contribution of qualitative reasoning

    NASA Astrophysics Data System (ADS)

    Orfanos, Stelios

    2010-01-01

    In traditional Greek teaching, many significant concepts are introduced in a sequence that does not provide students with all the information they need to comprehend them. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, since the modelling process forces students to turn their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative, in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some of the results of our investigation into modelling activities related to kinematic concepts. For this purpose, we used ModellingSpace, an environment especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of directly observing simulations of real objects and/or all the other alternative forms of representation (tables of values, graphic representations, and bar charts). In order to answer the questions, students formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional ways of teaching, students are taught to rely on formulas as the most important strategy. Students often recall formulas in order to use them without gaining an in-depth understanding of them. Students commonly use the quantitative type of reasoning, since it is the one primarily used in teaching, although it may not be fully understood by them […]

  4. Curcumin labels amyloid pathology in vivo, disrupts existing plaques, and partially restores distorted neurites in an Alzheimer mouse model.

    PubMed

    Garcia-Alloza, M; Borrelli, L A; Rozkalne, A; Hyman, B T; Bacskai, B J

    2007-08-01

    Alzheimer's disease (AD) is characterized by senile plaques and neurodegeneration although the neurotoxic mechanisms have not been completely elucidated. It is clear that both oxidative stress and inflammation play an important role in the illness. The compound curcumin, with a broad spectrum of anti-oxidant, anti-inflammatory, and anti-fibrilogenic activities may represent a promising approach for preventing or treating AD. Curcumin is a small fluorescent compound that binds to amyloid deposits. In the present work we used in vivo multiphoton microscopy (MPM) to demonstrate that curcumin crosses the blood-brain barrier and labels senile plaques and cerebrovascular amyloid angiopathy (CAA) in APPswe/PS1dE9 mice. Moreover, systemic treatment of mice with curcumin for 7 days clears and reduces existing plaques, as monitored with longitudinal imaging, suggesting a potent disaggregation effect. Curcumin also led to a limited, but significant reversal of structural changes in dystrophic dendrites, including abnormal curvature and dystrophy size. Together, these data suggest that curcumin reverses existing amyloid pathology and associated neurotoxicity in a mouse model of AD. This approach could lead to more effective clinical therapies for the prevention of oxidative stress, inflammation and neurotoxicity associated with AD.

  5. Incorporation of caffeine into a quantitative model of fatigue and sleep.

    PubMed

    Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A

    2011-03-21

    A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation.

  6. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    PubMed

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within- and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  7. Evaluation of the Use of Existing RELAP5-3D Models to Represent the Actinide Burner Test Reactor

    SciTech Connect

    C. B. Davis

    2007-02-01

    The RELAP5-3D code is being considered as a thermal-hydraulic system code to support the development of the sodium-cooled Actinide Burner Test Reactor as part of Global Nuclear Energy Partnership. An evaluation was performed to determine whether the control system could be used to simulate the effects of non-convective mechanisms of heat transport in the fluid that are not currently represented with internal code models, including axial and radial heat conduction in the fluid and subchannel mixing. The evaluation also determined the relative importance of axial and radial heat conduction and fluid mixing on peak cladding temperature for a wide range of steady conditions and during a representative loss-of-flow transient. The evaluation was performed using a RELAP5-3D model of a subassembly in the Experimental Breeder Reactor-II, which was used as a surrogate for the Actinide Burner Test Reactor. An evaluation was also performed to determine if the existing centrifugal pump model could be used to simulate the performance of electromagnetic pumps.

  8. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    PubMed

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value in order to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in the ecological compensation were significant among all the counties or districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.

  9. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model

    NASA Astrophysics Data System (ADS)

    Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

    We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using its contribution degree and sum all effects. Results from applications to practical models confirm that, at the 5% risk rate, there are no differences between the results obtained with fully quantitative relations and those obtained with the proposed method.
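
    One possible reading of this scheme is sketched below: each causal arc maps the source factor's landmark band to an effect sampled uniformly within that band, and incoming effects are combined as a contribution-degree-weighted sum over many Monte Carlo trials. The node names, landmarks, and contribution degrees are invented for illustration.

```python
# Minimal sketch of a landmark-based Monte Carlo propagation with
# contribution-degree weighting (node names, landmarks, and degrees are invented).
import numpy as np

rng = np.random.default_rng(5)
landmarks = [0.0, 0.3, 0.7, 1.0]   # divide every factor's range into three bands

def sample_effect(source_value):
    """Map the source value to its landmark band, then sample the propagated
    effect uniformly within that band."""
    band = int(np.searchsorted(landmarks, source_value, side="right")) - 1
    band = min(max(band, 0), len(landmarks) - 2)
    return rng.uniform(landmarks[band], landmarks[band + 1])

def evaluate_scenario(n_trials=10_000):
    outcomes = np.empty(n_trials)
    for i in range(n_trials):
        demand = rng.uniform(0.2, 0.9)   # hypothetical input business factors
        price = rng.uniform(0.4, 1.0)
        # two arcs into "profit", combined with contribution degrees 0.7 and 0.3
        outcomes[i] = 0.7 * sample_effect(demand) + 0.3 * sample_effect(price)
    return outcomes

profit = evaluate_scenario()
print("mean =", round(float(profit.mean()), 3),
      "| 5% / 95% quantiles =", np.quantile(profit, [0.05, 0.95]).round(3))
```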

  10. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    PubMed

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes, which occurs through the complexation of peptides by tannins. Neither the nature of these interactions nor, more fundamentally, the structure of these heterogeneous polyphenolic molecules is completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins, based on tannin model compounds and employing in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundation for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  11. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    PubMed Central

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value in order to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in the ecological compensation were significant among all the counties or districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  12. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    NASA Astrophysics Data System (ADS)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herb and food products threaten human health, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was combined with an improved partial least-squares regression (PLSR) model for the quantitative determination of AO in this paper. The absorbance of herbal samples with different concentrations was obtained by THz-TDS in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between concentrations, provided a clear criterion for the selection of the input interval, and improved the accuracy of the detection results. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
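
    The sketch below shows one way such a 2DCOS-assisted calibration could be assembled: a synchronous 2D correlation spectrum is computed from mean-centred spectra, its diagonal is used to select a concentration-sensitive input interval, and a PLSR model is fitted on that interval. The spectra, band positions, and component counts are synthetic placeholders.

```python
# Minimal sketch: synchronous 2D correlation spectrum used to pick a
# concentration-sensitive interval for PLSR (synthetic spectra and band).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
freqs = np.linspace(0.2, 1.6, 200)                 # THz
conc = np.linspace(0.0, 1.0, 12)                   # relative AO concentration
peak = np.exp(-0.5 * ((freqs - 1.0) / 0.05) ** 2)  # hypothetical AO absorption feature
spectra = conc[:, None] * peak[None, :] + 0.02 * rng.normal(size=(conc.size, freqs.size))

dynamic = spectra - spectra.mean(axis=0)                   # mean-centred "dynamic" spectra
synchronous = dynamic.T @ dynamic / (conc.size - 1)        # Noda's synchronous spectrum
band = np.argsort(np.diag(synchronous))[-30:]              # 30 most concentration-sensitive points

pls = PLSRegression(n_components=2).fit(spectra[:, band], conc)
print("R^2 on the selected interval:", round(pls.score(spectra[:, band], conc), 3))
```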

  13. A threshold of mechanical strain intensity for the direct activation of osteoblast function exists in a murine maxilla loading model.

    PubMed

    Suzuki, Natsuki; Aoki, Kazuhiro; Marcián, Petr; Borák, Libor; Wakabayashi, Noriyuki

    2016-10-01

    The response to the mechanical loading of bone tissue has been extensively investigated; however, precisely how much strain intensity is necessary to promote bone formation remains unclear. Combination studies utilizing histomorphometric and numerical analyses were performed using the established murine maxilla loading model to clarify the threshold of mechanical strain needed to accelerate bone formation activity. For 7 days, 191 kPa loading stimulation for 30 min/day was applied to C57BL/6J mice. Two regions of interest, the AWAY region (away from the loading site) and the NEAR region (near the loading site), were determined. The inflammatory score increased in the NEAR region, but not in the AWAY region. A strain intensity map obtained from [Formula: see text] images was superimposed onto the images of the bone formation inhibitor, sclerostin-positive cell localization. The number of sclerostin-positive cells significantly decreased after mechanical loading of more than [Formula: see text] in the AWAY region, but not in the NEAR region. The mineral apposition rate, which shows the bone formation ability of osteoblasts, was accelerated at the site of surface strain intensity, namely around [Formula: see text], but not at the site of lower surface strain intensity, which was around [Formula: see text] in the AWAY region, thus suggesting the existence of a strain intensity threshold for promoting bone formation. Taken together, our data suggest that a threshold of mechanical strain intensity for the direct activation of osteoblast function and the reduction of sclerostin exists in a murine maxilla loading model in the non-inflammatory region.

  14. Pneumococcal meningitis threshold model: a potential tool to assess infectious risk of new or existing inner ear surgical interventions

    PubMed Central

    Wei, Benjamin P.C.; Shepherd, Robert K.; Robins-Browne, Roy M.; Clark, Graeme M.; O'Leary, Stephen J.

    2007-01-01

    Hypothesis: A minimal threshold of S. pneumoniae is required to induce meningitis in healthy animals for intraperitoneal (hematogenous), middle ear and inner ear inoculations, and this threshold may be altered by recent inner ear surgery. Background: There has been an increase in the number of reported cases of cochlear implant-related pneumococcal meningitis since 2002. The pathogenesis of pneumococcal meningitis is complex and not completely understood. The bacteria can reach the central nervous system (CNS) from the upper respiratory tract mucosa via either the hematogenous route or via the inner ear. The establishment of a threshold model for all potential routes of infection to the CNS in animals without cochlear implantation is an important first step to help us understand the pathogenesis of the disease in animals with cochlear implantation. Methods: 54 otologically normal, adult Hooded Wistar rats (27 receiving cochleostomy and 27 controls) were inoculated with different amounts of bacterial counts via three different routes (intraperitoneal, middle ear and inner ear). Rats were monitored over 5 days for signs of meningitis. Blood, CSF and middle ear swabs were taken for bacterial culture, and brains and cochleae were examined for signs of infection. Results: The threshold of bacterial counts required to induce meningitis is lowest in rats receiving direct inner ear inoculation compared to both intraperitoneal and middle ear inoculation. There is no change in threshold between the group of rats with cochleostomy and the control (Fisher exact test; p < 0.05). Conclusion: A minimal threshold of bacteria is required to induce meningitis in healthy animals and is different for three different routes of infection (intraperitoneal, middle ear and inner ear). Cochleostomy performed 4 weeks prior to the inoculation did not reduce the threshold of bacteria required for meningitis in all three infectious routes. This threshold model will also serve as a valuable tool, assisting […]

  15. A Quantitative Quasispecies Theory-Based Model of Virus Escape Mutation Under Immune Selection

    DTIC Science & Technology

    2012-01-01

    Woo, Hyung-June; Reifman, Jaques. Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although […] response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral […]

  16. A quantitative model of human DNA base excision repair. I. Mechanistic insights.

    PubMed

    Sokhansanj, Bahrad A; Rodrigue, Garry R; Fitch, J Patrick; Wilson, David M

    2002-04-15

    Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic i