Science.gov

Sample records for existing models quantitatively

  1. A Primer on Quantitative Modeling.

    PubMed

    Neagu, Iulia; Levine, Erel

    2015-01-01

    Caenorhabditis elegans is particularly suitable for obtaining quantitative data about behavior, neuronal activity, gene expression, ecological interactions, quantitative traits, and much more. To exploit the full potential of these data one seeks to interpret them within quantitative models. Using two examples from the C. elegans literature we briefly explore several types of modeling approaches relevant to worm biology, and show how they might be used to interpret data, formulate testable hypotheses, and suggest new experiments. We emphasize that the choice of modeling approach is strongly dependent on the questions of interest and the type of available knowledge.

  2. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters a rheological model has, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
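
    As a rough illustration of the fit-versus-complexity trade-off described above, the Python sketch below fits one-, two-, and three-mode Maxwell relaxation moduli to synthetic data and scores them with BIC. BIC is only a crude stand-in for the paper's full Bayesian evidence with explicit prior ranges; the data and all constants here are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def maxwell(t, *params):
            # N-mode Maxwell relaxation modulus: G(t) = sum_i g_i * exp(-t / tau_i)
            g, tau = np.asarray(params[0::2]), np.asarray(params[1::2])
            return np.exp(-t[:, None] / tau) @ g

        rng = np.random.default_rng(0)
        t = np.logspace(-2, 2, 60)
        data = maxwell(t, 1.0, 0.1, 0.5, 5.0)            # two "true" modes
        data = data * (1 + 0.05 * rng.standard_normal(t.size))

        for n_modes in (1, 2, 3):
            p0 = [v for i in range(n_modes) for v in (1.0, 10.0 ** (i - 1))]
            popt, _ = curve_fit(maxwell, t, data, p0=p0,
                                bounds=(1e-8, np.inf), maxfev=20000)
            sigma2 = np.mean((data - maxwell(t, *popt)) ** 2)
            bic = t.size * np.log(sigma2) + 2 * n_modes * np.log(t.size)
            print(f"{n_modes} mode(s): BIC = {bic:.1f}")  # lower is better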

  3. Modeling Truth Existence in Truth Discovery.

    PubMed

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-08-01

    When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery aims to infer the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, named no-truth questions, are difficult to distinguish from the questions that have true answers, named has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate, and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model, which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inferring this graphical model requires parameter tuning of the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely, participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods.
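
    A toy rendering of the three source-quality measures named in this abstract: the sketch below computes silent, true-spoken, and false-spoken rates over a hand-made answer table with known ground truth. The exact normalisation used in the paper may differ; everything here is illustrative.

        # source -> {question: answer, or None if the source is silent}
        answers = {
            "s1": {"q1": "a", "q2": None, "q3": "c"},
            "s2": {"q1": "b", "q2": "b", "q3": "d"},
        }
        truth = {"q1": "a", "q2": "b", "q3": "d"}   # toy ground truth

        for src, resp in answers.items():
            n = len(truth)
            silent = sum(resp.get(q) is None for q in truth) / n
            true_spoken = sum(resp.get(q) == truth[q] for q in truth) / n
            false_spoken = 1.0 - silent - true_spoken
            print(f"{src}: silent={silent:.2f} true={true_spoken:.2f} "
                  f"false={false_spoken:.2f}")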

  4. Quantitative reactive modeling and verification.

    PubMed

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  5. Modeling Truth Existence in Truth Discovery

    PubMed Central

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-01-01

    When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery aims to infer the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, named no-truth questions, are difficult to distinguish from the questions that have true answers, named has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate, and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model, which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inferring this graphical model requires parameter tuning of the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely, participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods. PMID:26705507

  6. LDEF data: Comparisons with existing models

    NASA Technical Reports Server (NTRS)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-01-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) and the existing models for both the natural micrometeoroid environment and man-made debris was investigated. Experimental data were provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A PC (personal computer) program, SPENV, was written that incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as a function of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. Many hydrodynamic impact simulations of various impact events were also conducted using CTH; these identified certain modes of response, including simple metallic target cratering, perforation, and delamination effects of coatings.
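
    The flux-to-crater conversion mentioned above can be sketched as a pair of power laws: a cumulative particle flux and an assumed crater-diameter scaling that lets crater production rates be read off the flux model. The constants below are illustrative placeholders, not values from SPENV or the LDEF analysis.

        def cumulative_flux(d_particle_cm):
            # assumed cumulative flux of particles larger than d [1/(m^2 yr)]
            return 1.0e-4 * d_particle_cm ** -2.5

        def crater_rate(D_crater_cm, K=5.0, a=1.0):
            # assumed scaling law: crater diameter = K * (particle diameter)^a,
            # so craters larger than D come from particles larger than d(D)
            d = (D_crater_cm / K) ** (1.0 / a)
            return cumulative_flux(d)

        print(crater_rate(0.1))   # rate of craters larger than 1 mm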

  7. Quantitative vortex models of turbulence

    NASA Astrophysics Data System (ADS)

    Pullin, D. I.

    2001-11-01

    This presentation will review attempts to develop models of turbulence, based on compact vortex elements, that can be used both to obtain quantitative estimates of various statistical properties of turbulent fine scales and also to formulate subgrid-transport models for large-eddy simulation (LES). Attention will be focused on a class of stretched-vortex models. Following a brief review of prior work, recent studies of vortex-based modeling of the small-scale behavior of a passive scalar will be discussed. The large-wavenumber spectrum of a passive scalar undergoing mixing by the velocity field of a stretched-spiral vortex will be shown to consist of the sum of two classical power laws, a k^-1 Batchelor spectrum for wavenumbers up to the inverse Batchelor scale, and a k^-5/3 Obukhov-Corrsin spectrum for wavenumbers less than the inverse Kolmogorov scale (joint work with T.S. Lundgren). We will then focus on the use of stretched vortices as the basic subgrid structure in subgrid-scale (SGS) modeling for LES of turbulent flows. An SGS stress model and a vortex-based scalar-flux model for the LES of flows with turbulent mixing will be outlined. Application of these models to the LES of decaying turbulence, channel flow, the mixing of a passive scalar by homogeneous turbulence in the presence of a mean scalar gradient, and to the LES of compressible turbulence will be described.
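
    A minimal numerical rendering of that composite spectrum, with the two power-law amplitudes matched for continuity at the inverse Kolmogorov scale. The cutoff wavenumbers are assumed round numbers, not results from the cited work.

        import numpy as np

        k_kol = 1.0e2    # inverse Kolmogorov scale (assumed)
        k_bat = 1.0e3    # inverse Batchelor scale (assumed)

        def scalar_spectrum(k, C=1.0):
            oc = C * k ** (-5.0 / 3.0)                   # Obukhov-Corrsin range
            # amplitude chosen so the two ranges join continuously at k_kol
            bat = C * k_kol ** (-2.0 / 3.0) * k ** -1.0  # Batchelor range
            return np.where(k < k_kol, oc, np.where(k < k_bat, bat, 0.0))

        print(scalar_spectrum(np.logspace(0, 3.5, 8)))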

  8. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, Do-Hyuk; Tang, Shurun; Kim, Edward J.

    2015-10-01

    A radiative transfer model (RTM) to calculate snow brightness temperatures (Tb) is a critical element in terrestrial snow parameter retrieval from microwave remote sensing observations. The RTM simulates the Tb of a layered snowpack by solving a set of microwave radiative transfer equations. Even with the same snow physical inputs to drive the RTM, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-QMS), and Helsinki University of Technology (HUT) models produce different Tb responses. To invert snow physical properties from the Tb, the differences among the RTMs must first be quantitatively explained. To this end, this initial investigation evaluates the sources of perturbations in these RTMs and reveals the equations where the variations arise among the three models. Modelling experiments are conducted by providing the same but gradually varied snow physical inputs, such as snow grain size and snow density, to the 3 RTMs. Simulations are conducted at frequencies consistent with the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the 3 RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against in-situ snow samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010 campaigns.
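
    A toy illustration of why such a comparison matters: two invented scattering laws, fed identical snow inputs, return visibly different brightness temperatures at the AMSR-E frequencies. Neither law is the MEMLS, DMRT-QMS, or HUT formulation; this only mimics the shape of the problem.

        import numpy as np

        freq_ghz = np.array([6.9, 10.7, 18.7, 23.8, 36.5, 89.0])
        t_snow, grain_mm, depth_m = 260.0, 1.0, 0.8   # assumed snowpack

        def tb(scatter_coeff_per_m):
            # single-layer, zeroth-order emission: Tb = T * exp(-tau)
            return t_snow * np.exp(-scatter_coeff_per_m * depth_m)

        ks_a = 0.1 * (freq_ghz / 18.7) ** 2.0 * grain_mm   # invented law A
        ks_b = 0.1 * (freq_ghz / 18.7) ** 2.5 * grain_mm   # invented law B
        print(np.round(tb(ks_a), 1))
        print(np.round(tb(ks_b), 1))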

  9. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, D. H.; Tan, S.; Kim, E. J.

    2015-12-01

    A radiative transfer model (RTM) to calculate a snow brightness temperature (Tb) is a critical element in retrieving terrestrial snow from microwave remote sensing observations. The RTM simulates the Tb of a layered snowpack by solving a set of microwave radiative transfer formulas. Even with the same snow physical inputs used for the RTM, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-Tsang), and Helsinki University of Technology (HUT) models produce different Tb responses. To invert snow physical properties from the Tb, the differences among the RTMs must be quantitatively explained. To this end, the paper evaluates the sources of perturbations in the RTMs and reveals the equations where the variations arise among the three models. Investigations are conducted by providing the same but gradually varied snow physical inputs, such as snow grain size and snow density, to the 3 RTMs. Simulations are done at frequencies consistent with the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the 3 RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against snow core samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010 campaigns.

  10. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  11. Progress on Quantitative Modeling of rf Sheaths

    NASA Astrophysics Data System (ADS)

    D'Ippolito, D. A.; Myra, J. R.; Kohno, H.; Wright, J. C.

    2011-12-01

    A new quantitative approach for computing the rf sheath potential is described, which incorporates plasma dielectric effects and the relative geometry of the magnetic field and the material boundaries. The new approach uses a modified boundary condition ("rf sheath BC") that couples the rf waves and the sheaths at the boundary. It treats the sheath as a thin vacuum region and matches the fields across the plasma-vacuum boundary. When combined with the Child-Langmuir Law (relating the sheath width and sheath potential), the model permits a self-consistent determination of the sheath parameters and the rf electric field at the sheath-plasma boundary. Semi-analytic models using this BC predict a number of general features, including a sheath voltage threshold, a dimensionless parameter characterizing rf sheath effects, and the existence of sheath plasma waves with an associated resonance. Since the sheath BC is nonlinear and dependent on geometry, computing the sheath potential numerically is a challenging computational problem. Numerical results will be presented from a new parallel-processing finite-element rf wave code for the tokamak scrape-off layer (called "rfSOL"). The code has verified the physics predicted by analytic theory in 1D, and extended the solutions into model 2D geometries. The numerical calculations confirm the existence of multiple roots and hysteresis effects, and parameter studies have been carried out. Areas for future work will be discussed.
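
    The self-consistency step described above can be caricatured with a fixed-point iteration: the Child-Langmuir relation gives the sheath width from the potential, and a thin-sheath capacitive closure (an assumption here, not the actual rf sheath BC) gives the potential back from the width. All numbers are illustrative.

        lambda_d = 1.0e-4   # Debye length [m] (assumed)
        t_e = 10.0          # electron temperature [eV] (assumed)
        e_rf = 2.0e5        # rf field across the sheath [V/m] (assumed)

        v = t_e             # initial guess for sheath potential [V]
        for _ in range(100):
            width = lambda_d * (v / t_e) ** 0.75   # Child-Langmuir width
            v_new = e_rf * width                   # potential across thin sheath
            if abs(v_new - v) < 1e-9 * v_new:
                break
            v = v_new
        print(f"sheath potential ~ {v:.1f} V, width ~ {width:.2e} m")
        # converges to v = 160 V, width = 8e-4 m for these numbers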

  12. Integrated Environmental Modeling: Quantitative Microbial Risk Assessment

    EPA Science Inventory

    The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...

  13. Quantitative structure - mesothelioma potency model ...

    EPA Pesticide Factsheets

    Cancer potencies of mineral and synthetic elongated particle (EP) mixtures, including asbestos fibers, are influenced by changes in fiber dose composition, bioavailability, and biodurability in combination with relevant cytotoxic dose-response relationships. A unique and comprehensive rat intra-pleural (IP) dose characterization data set with a wide variety of EP size, shape, crystallographic, chemical, and bio-durability properties facilitated extensive statistical analyses of 50 rat IP exposure test results for evaluation of alternative dose pleural mesothelioma response models. Utilizing logistic regression, maximum likelihood evaluations of thousands of alternative dose metrics based on hundreds of individual EP dimensional variations within each test sample, four major findings emerged: (1) data for simulations of short-term EP dose changes in vivo (mild acid leaching) provide superior predictions of tumor incidence compared to non-acid leached data; (2) the sum of the EP surface areas (ΣSA) from these mildly acid-leached samples provides the optimum holistic dose response model; (3) progressive removal of dose associated with very short and/or thin EPs significantly degrades resultant ΣEP or ΣSA dose-based predictive model fits, as judged by Akaike’s Information Criterion (AIC); and (4) alternative, biologically plausible model adjustments provide evidence for reduced potency of EPs with length/width (aspect) ratios 80 µm. Regar

  14. 6 Principles for Quantitative Reasoning and Modeling

    ERIC Educational Resources Information Center

    Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye

    2014-01-01

    Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…

  15. Pleiotropic Models of Quantitative Variation

    PubMed Central

    Barton, N. H.

    1990-01-01

    It is widely held that each gene typically affects many characters, and that each character is affected by many genes. Moreover, strong stabilizing selection cannot act on an indefinitely large number of independent traits. This makes it likely that heritable variation in any one trait is maintained as a side effect of polymorphisms which have nothing to do with selection on that trait. This paper examines the idea that variation is maintained as the pleiotropic side effect of either deleterious mutation, or balancing selection. If mutation is responsible, it must produce alleles which are only mildly deleterious (s ≈ 10^-3), but nevertheless have significant effects on the trait. Balancing selection can readily maintain high heritabilities; however, selection must be spread over many weakly selected polymorphisms if large responses to artificial selection are to be possible. In both classes of pleiotropic model, extreme phenotypes are less fit, giving the appearance of stabilizing selection on the trait. However, it is shown that this effect is weak (of the same order as the selection on each gene): the strong stabilizing selection which is often observed is likely to be caused by correlations with a limited number of directly selected traits. Possible experiments for distinguishing the alternatives are discussed. PMID:2311921

  16. Global existence for a degenerate haptotaxis model of cancer invasion

    NASA Astrophysics Data System (ADS)

    Zhigun, Anna; Surulescu, Christina; Uatay, Aydar

    2016-12-01

    We propose and study a strongly coupled PDE-ODE system with tissue-dependent degenerate diffusion and haptotaxis that can serve as a model prototype for cancer cell invasion through the extracellular matrix. We prove the global existence of weak solutions and illustrate the model behavior by numerical simulations for a two-dimensional setting.

  17. Is It Possible to Prove the Existence of an Aging Program by Quantitative Analysis of Mortality Dynamics?

    PubMed

    Shilovsky, G A; Putyatina, T S; Lysenkov, S N; Ashapkin, V V; Luchkina, O S; Markov, A V; Skulachev, V P

    2016-12-01

    Accumulation of various types of lesions in the course of aging increases an organism's vulnerability and results in a monotonous elevation of mortality rate, irrespective of the position of a species on the evolutionary tree. Stroustrup et al. (Nature, 530, 103-107) [1] showed in 2016 that in the nematode Caenorhabditis elegans, longevity-altering factors (e.g. oxidative stress, temperature, or diet) do not change the shape of the survival curve, but either stretch or shrink it along the time axis, which the authors attributed to the existence of an "aging program". Modification of the accelerated failure time model by Stroustrup et al. uses temporal scaling as a basic approach for distinguishing between quantitative and qualitative changes in aging dynamics. Thus we analyzed data on the effects of various longevity-increasing genetic manipulations in flies, worms, and mice and used several models to choose a theory that would best fit the experimental results. The possibility to identify the moment of switch from a mortality-governing pathway to some other pathways might be useful for testing geroprotective drugs. In this work, we discuss this and other aspects of temporal scaling.
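
    The temporal-scaling test at the heart of this approach can be illustrated numerically: if an intervention only rescales time, the treated survival curve collapses onto the control curve after stretching by a single factor c. The curves below are synthetic Gompertz-type examples, not the worm data.

        import numpy as np

        t = np.linspace(0.0, 40.0, 401)

        def survival(t, scale):
            # synthetic Gompertz-like survival curve; 'scale' stretches time
            return np.exp(-0.002 * np.expm1(0.25 * t / scale))

        control = survival(t, 1.0)
        treated = survival(t, 1.5)    # lifespan stretched by 50%

        def mismatch(c):
            # compare the treated curve with a time-rescaled control curve
            return np.mean((treated - np.interp(t / c, t, control)) ** 2)

        cs = np.linspace(0.5, 2.5, 201)
        best = cs[np.argmin([mismatch(c) for c in cs])]
        print(f"estimated temporal scaling factor: {best:.2f}")   # ~1.50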

  18. Training of Existing Workers: Issues, Incentives and Models. Support Document

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This document was produced by the authors based on their research for the report, "Training of Existing Workers: Issues, Incentives and Models," (ED495138) and is an added resource for further information. This support document is divided into the following sections: (1) The Retail Industry--A Snapshot; (2) Case Studies--Hardware, Retail…

  19. Facilities Management of Existing School Buildings: Two Models.

    ERIC Educational Resources Information Center

    Building Technology, Inc., Silver Spring, MD.

    While all school districts are responsible for the management of their existing buildings, they often approach the task in different ways. This document presents two models that offer ways a school district administration, regardless of size, may introduce activities into its ongoing management process that will lead to improvements in earthquake…

  20. Quantitative modeling of soil genesis processes

    NASA Technical Reports Server (NTRS)

    Levine, E. R.; Knox, R. G.; Kerber, A. G.

    1992-01-01

    For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.

  1. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
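
    A minimal sketch of that linkage in Python/pandas, standing in for the Excel workbook: a unique data key joins each Basic Event to its data source and to the adjustment applied before use. Column names, keys, and rates are all invented.

        import pandas as pd

        events = pd.DataFrame({
            "basic_event": ["VALVE_FTO", "PUMP_FTR"],
            "data_key": ["DK-001", "DK-002"],   # unique metadata field
        })
        sources = pd.DataFrame({
            "data_key": ["DK-001", "DK-002"],
            "base_rate": [1.2e-5, 3.4e-6],      # failures per hour (invented)
            "stress_factor": [1.0, 2.5],        # duty-cycle adjustment (invented)
        })

        model = events.merge(sources, on="data_key")
        model["used_rate"] = model["base_rate"] * model["stress_factor"]
        print(model[["basic_event", "data_key", "used_rate"]])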

  2. Small data global existence for a fluid-structure model

    NASA Astrophysics Data System (ADS)

    Ignatova, Mihaela; Kukavica, Igor; Lasiecka, Irena; Tuffaha, Amjad

    2017-02-01

    We address the system of partial differential equations modeling motion of an elastic body inside an incompressible fluid. The fluid is modeled by the incompressible Navier-Stokes equations while the structure is represented by the damped wave equation with interior damping. The additional boundary stabilization γ, considered in our previous paper, is no longer necessary. We prove the global existence and exponential decay of solutions for small initial data in a suitable Sobolev space.

  3. The existence of amorphous phase in Portland cements: Physical factors affecting Rietveld quantitative phase analysis

    SciTech Connect

    Snellings, Ruben; Bazzoni, Amélie; Scrivener, Karen

    2014-05-01

    Rietveld quantitative phase analysis has become a widespread tool for the characterization of Portland cement, both for research and production control purposes. One of the major remaining points of debate is whether Portland cements contain amorphous content or not. This paper presents detailed analyses of the amorphous phase contents in a set of commercial Portland cements, clinker, synthetic alite and limestone by Rietveld refinement of X-ray powder diffraction measurements using both external and internal standard methods. A systematic study showed that the sample preparation and comminution procedure is closely linked to the calculated amorphous contents. Particle size reduction by wet-grinding lowered the calculated amorphous contents to insignificant quantities for all materials studied. No amorphous content was identified in the final analysis of the Portland cements under investigation.
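
    For context, the generic internal-standard relation that underlies analyses of this kind (a textbook formula, not a result specific to this paper): with a weighed-in standard fraction W_s and its Rietveld-refined fraction R_s, the amorphous fraction of the original sample is

        A = (1 - W_s / R_s) / (1 - W_s)

    For example, spiking with W_s = 0.20 and refining R_s = 1/3 gives A = 0.50, i.e. half of the unspiked sample is amorphous.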

  4. Quantitative Microbiologic Models for Preterm Delivery

    PubMed Central

    Onderdonk, Andrew B.; Lee, Mei-Ling; Lieberman, Ellice; Delaney, Mary L.; Tuomala, Ruth E.

    2003-01-01

    Preterm delivery (PTD) is the leading cause of infant morbidity and mortality in the United States. An epidemiological association between PTD and various bacteria that are part of the vaginal microflora has been reported. No single bacterial species has been identified as being causally associated with PTD, suggesting a multifactorial etiology. Quantitative microbiologic cultures have been used previously to define normal vaginal microflora in a predictive model. These techniques have been applied to vaginal swab cultures from pregnant women in an effort to develop predictive microbiologic models for PTD. Logistic regression analysis with microbiologic information was performed for various risk groups, and the probability of a PTD was calculated for each subject. Four predictive models were generated by using the quantitative microbiologic data. The area under the curve (AUC) for the receiver operating curves ranged from 0.74 to 0.94, with confidence intervals (CI) ranging from 0.62 to 1. The model for the previous PTD risk group with the highest percentage of PTDs had an AUC of 0.91 (CI, 0.79 to 1). It may be possible to predict PTD by using microbiologic risk factors measured once the gestation period has reached the 20-week time point. PMID:12624032
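
    A sketch of the modelling pipeline the abstract names (logistic regression scored by the area under the ROC curve), in Python/scikit-learn on synthetic data; the real study used quantitative vaginal culture data and per-risk-group models.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 300
        log_cfu = rng.normal(4.0, 1.5, size=(n, 3))   # log10 counts of 3 taxa
        logit = -6.0 + 1.1 * log_cfu[:, 0]            # taxon 0 drives risk (toy)
        ptd = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # preterm yes/no

        clf = LogisticRegression().fit(log_cfu, ptd)
        auc = roc_auc_score(ptd, clf.predict_proba(log_cfu)[:, 1])
        print(f"in-sample AUC = {auc:.2f}")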

  5. Quantitative risk modelling for new pharmaceutical compounds.

    PubMed

    Tang, Zhengru; Taylor, Mark J; Lisboa, Paulo; Dyas, Mark

    2005-11-15

    The process of discovering and developing new drugs is long, costly and risk-laden. Faced with a wealth of newly discovered compounds, industrial scientists need to target resources carefully to discern the key attributes of a drug candidate and to make informed decisions. Here, we describe a quantitative approach to modelling the risk associated with drug development as a tool for scenario analysis concerning the probability of success of a compound as a potential pharmaceutical agent. We bring together the three strands of manufacture, clinical effectiveness and financial returns. This approach involves the application of a Bayesian Network. A simulation model is demonstrated with an implementation in MS Excel using the modelling engine Crystal Ball.
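
    A bare-bones scenario simulation in the spirit of the abstract: manufacture, clinical effectiveness, and financial return combined into one probability of success. The paper builds a Bayesian network in MS Excel with Crystal Ball; the independent stage probabilities and the NPV distribution below are invented simplifications.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000
        manufacturable = rng.random(n) < 0.8     # assumed stage probability
        effective = rng.random(n) < 0.3          # assumed stage probability
        npv = rng.normal(50.0, 120.0, n)         # net present value, $M (assumed)

        success = manufacturable & effective & (npv > 0.0)
        print(f"P(success) ~ {success.mean():.3f}")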

  6. Existence of needle crystals in local models of solidification

    NASA Technical Reports Server (NTRS)

    Langer, J. S.

    1986-01-01

    The way in which surface tension acts as a singular perturbation to destroy the continuous family of needle-crystal solutions of the steady-state growth equations is analyzed in detail for two local models of solidification. All calculations are performed in the limit of small surface tension or, equivalently, small velocity. The basic mathematical ideas are introduced in connection with a quasilinear, isotropic version of the geometrical model of Brower et al., in which case the continuous family of solutions disappears completely. The formalism is then applied to a simplified boundary-layer model with an anisotropic kinetic attachment coefficient. In the latter case, the solvability condition for the existence of needle crystals can be satisfied whenever the coefficient of anisotropy is arbitrarily small but nonzero.

  7. Synthetic quantitative MRI through relaxometry modelling

    PubMed Central

    Mohammadi, Siawoosh; Weiskopf, Nikolaus

    2016-01-01

    Quantitative MRI (qMRI) provides standardized measures of specific physical parameters that are sensitive to the underlying tissue microstructure and are a first step towards achieving maps of biologically relevant metrics through in vivo histology using MRI. Recently proposed models have described the interdependence of qMRI parameters. Combining such models with the concept of image synthesis points towards a novel approach to synthetic qMRI, in which maps of fundamentally different physical properties are constructed through the use of biophysical models. In this study, the utility of synthetic qMRI is investigated within the context of a recently proposed linear relaxometry model. Two neuroimaging applications are considered. In the first, artefact-free quantitative maps are synthesized from motion-corrupted data by exploiting the over-determined nature of the relaxometry model and the fact that the artefact is inconsistent across the data. In the second application, a map of magnetization transfer (MT) saturation is synthesized without the need to acquire an MT-weighted volume, which directly leads to a reduction in the specific absorption rate of the acquisition. This feature would be particularly important for ultra-high field applications. The synthetic MT map is shown to provide improved segmentation of deep grey matter structures, relative to segmentation using T1-weighted images or R1 maps. The proposed approach of synthetic qMRI shows promise for maximizing the extraction of high quality information related to tissue microstructure from qMRI protocols and furthering our understanding of the interrelation of these qMRI parameters. PMID:27753154
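
    The synthesis idea can be miniaturised as follows: if the qMRI parameters obey an (assumed) linear relaxometry relation R1 = a + b*MTsat + c*R2*, then coefficients fitted once allow an MT saturation map to be synthesized from R1 and R2* alone. The coefficients and value ranges below are invented, not those of the cited model.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 1000
        mt = rng.uniform(0.5, 2.0, n)      # MT saturation, p.u. (toy)
        r2s = rng.uniform(10.0, 40.0, n)   # R2*, 1/s (toy)
        r1 = 0.25 + 0.30 * mt + 0.005 * r2s + rng.normal(0.0, 0.01, n)

        # fit the linear relaxometry model, then invert it for MT
        X = np.column_stack([np.ones(n), mt, r2s])
        a, b, c = np.linalg.lstsq(X, r1, rcond=None)[0]
        mt_synth = (r1 - a - c * r2s) / b
        print(f"max |MT error|: {np.abs(mt_synth - mt).max():.3f}")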

  8. Magnetospheric mapping with quantitative geomagnetic field models

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Mead, G. D.

    1973-01-01

    The Mead-Fairfield geomagnetic field models were used to trace field lines between the outer magnetosphere and the earth's surface. The results are presented in terms of ground latitude and local time contours projected to the equatorial plane and into the geomagnetic tail. With these contours various observations can be mapped along field lines between high and low altitudes. Low altitudes observations of the polar cap boundary, the polar cusp, the energetic electron trapping boundary and the sunward convection region are projected to the equatorial plane and compared with the results of the model and with each other. The results provide quantitative support to the earlier suggestions that the trapping boundary is associated with the last closed field line in the sunward hemisphere, the polar cusp is associated with the region of the last closed field line, and the polar cap projects to the geomagnetic tail and has a low latitude boundary corresponding to the last closed field line.
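
    Field-line mapping of this kind reduces to integrating dX/ds = B(X)/|B(X)| through a model field. The sketch below uses a pure dipole in place of the Mead-Fairfield model (which adds external current contributions) and traces from 5 Earth radii in the equatorial plane to the surface.

        import numpy as np
        from scipy.integrate import solve_ivp

        M = np.array([0.0, 0.0, -1.0])   # dipole moment, arbitrary units

        def b_field(x):
            r = np.linalg.norm(x)
            return 3.0 * x * (M @ x) / r ** 5 - M / r ** 3

        def rhs(s, x):
            b = b_field(x)
            return b / np.linalg.norm(b)   # unit tangent along the field line

        def hit_surface(s, x):             # stop when r reaches 1 Earth radius
            return np.linalg.norm(x) - 1.0
        hit_surface.terminal = True

        sol = solve_ivp(rhs, [0.0, 100.0], [5.0, 0.0, 0.0],
                        events=hit_surface, max_step=0.1)
        foot = sol.y[:, -1]
        lat = np.degrees(np.arcsin(foot[2] / np.linalg.norm(foot)))
        print(f"footpoint latitude ~ {lat:.1f} deg")   # ~63.4 for L = 5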

  9. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  10. A Transformative Model for Undergraduate Quantitative Biology Education

    PubMed Central

    Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  11. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The model addresses three aspects of habitability: visual, kinesthetic, and social logic. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  12. Existence of the Entropy Solution for a Viscoelastic Model

    NASA Astrophysics Data System (ADS)

    Zhu, Changjiang

    1998-06-01

    In this paper, we consider the Cauchy problem for a viscoelastic model with relaxation, u_t + σ_x = 0, (σ − f(u))_t + (1/δ)(σ − μf(u)) = 0, with discontinuous, large initial data, where 0 < μ < 1 and δ > 0 are constants. When the system is nonstrictly hyperbolic, under the additional assumption v_0x ∈ L^∞, the system is reduced to an inhomogeneous scalar balance law by employing the special form of the system itself. After introducing a definition of entropy solutions to the system, we prove the existence, uniqueness, and continuous dependence of the global entropy solution for the system. When the system is strictly hyperbolic, some special entropy pairs of the Lax type are constructed, in which the progression terms are functions of a single variable, and the necessary estimates for the major terms are obtained by using the theory of singular perturbation of ordinary differential equations. The special entropy pairs are used to prove the existence of global entropy solutions for the corresponding Cauchy problem by applying the method of compensated compactness.

  13. Existence of Periodic Solutions for a Modified Growth Solow Model

    NASA Astrophysics Data System (ADS)

    Fabião, Fátima; Borges, Maria João

    2010-10-01

    In this paper we analyze the dynamics of the Solow growth model with a Cobb-Douglas production function. For this purpose, we consider that the labour growth rate, L'(t)/L(t), is a T-periodic function, for a fixed positive real number T. We obtain closed-form solutions for the fundamental Solow equation with the new description of L(t). Using notions from the qualitative theory of ordinary differential equations and nonlinear functional analysis, we prove that there exists one T-periodic solution of the Solow equation. From the economic point of view this is a new result which allows a more realistic interpretation of the stylized facts.
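
    A quick numerical check of the flavour of this result, assuming the standard per-worker form of the Solow equation with Cobb-Douglas technology, k' = s*k^alpha - (n(t) + delta)*k, and a T-periodic labour growth rate n(t) (all parameter values invented): after transients decay, the trajectory repeats itself period for period.

        import numpy as np
        from scipy.integrate import solve_ivp

        s, alpha, delta, T = 0.3, 0.33, 0.05, 1.0

        def n(t):                     # T-periodic labour growth rate L'(t)/L(t)
            return 0.02 + 0.01 * np.sin(2.0 * np.pi * t / T)

        def rhs(t, k):                # Solow: k' = s k^alpha - (n(t) + delta) k
            return [s * k[0] ** alpha - (n(t) + delta) * k[0]]

        sol = solve_ivp(rhs, [0.0, 200.0 * T], [1.0],
                        dense_output=True, rtol=1e-10, atol=1e-12)
        k1, k2 = sol.sol(198.0 * T)[0], sol.sol(199.0 * T)[0]
        print(f"k(198T) = {k1:.6f}, k(199T) = {k2:.6f}")   # nearly identical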

  14. First Principles Quantitative Modeling of Molecular Devices

    NASA Astrophysics Data System (ADS)

    Ning, Zhanyu

    In this thesis, we report theoretical investigations of nonlinear and nonequilibrium quantum electronic transport properties of molecular transport junctions from atomistic first principles. The aim is to seek not only qualitative but also quantitative understanding of the corresponding experimental data. At present, the challenges to quantitative theoretical work in molecular electronics include two most important questions: (i) what is the proper atomic model for the experimental devices? (ii) how to accurately determine quantum transport properties without any phenomenological parameters? Our research is centered on these questions. We have systematically calculated atomic structures of the molecular transport junctions by performing total energy structural relaxation using density functional theory (DFT). Our quantum transport calculations were carried out by implementing DFT within the framework of Keldysh non-equilibrium Green's functions (NEGF). The calculated data are directly compared with the corresponding experimental measurements. Our general conclusion is that quantitative comparison with experimental data can be made if the device contacts are correctly determined. We calculated properties of nonequilibrium spin injection from Ni contacts to octane-thiolate films which form a molecular spintronic system. The first principles results allow us to establish a clear physical picture of how spins are injected from the Ni contacts through the Ni-molecule linkage to the molecule, why tunnel magnetoresistance is rapidly reduced by the applied bias in an asymmetric manner, and to what extent ab initio transport theory can make quantitative comparisons to the corresponding experimental data. We found that extremely careful sampling of the two-dimensional Brillouin zone of the Ni surface is crucial for accurate results in such a spintronic system. We investigated the role of contact formation and its resulting structures to quantum transport in several molecular

  15. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  16. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  17. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  18. Toward quantitative modeling of silicon phononic thermocrystals

    NASA Astrophysics Data System (ADS)

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Monfray, S.; Skotnicki, T.; Dubois, E.

    2015-03-01

    The wealth of patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of "thermocrystals" or "nanophononic crystals" that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known "electron crystal-phonon glass" dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
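
    The Green-Kubo step named above computes the lattice thermal conductivity from the time integral of the heat-flux autocorrelation, kappa = V / (kB * T^2) * integral of <J(0) J(t)> dt. The sketch below applies that recipe to a synthetic flux series standing in for molecular-dynamics output; all settings are invented.

        import numpy as np

        kB = 1.380649e-23                     # Boltzmann constant, J/K
        T, V, dt = 300.0, 1.0e-24, 1.0e-15    # K, m^3, s (assumed MD settings)

        rng = np.random.default_rng(3)
        J = 1.0e9 * rng.standard_normal(20000)   # toy heat-flux series, W/m^2

        def autocorrelation(x, max_lag):
            x = x - x.mean()
            return np.array([np.mean(x[: x.size - k] * x[k:])
                             for k in range(max_lag)])

        acf = autocorrelation(J, 200)
        kappa = V / (kB * T ** 2) * acf.sum() * dt   # discrete Green-Kubo integral
        print(f"kappa ~ {kappa:.3e} W/(m K)")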

  19. Toward quantitative modeling of silicon phononic thermocrystals

    SciTech Connect

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Dubois, E.; Monfray, S.; Skotnicki, T.

    2015-03-16

    The wealth of patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of "thermocrystals" or "nanophononic crystals" that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known "electron crystal-phonon glass" dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.

  20. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that resulting waves obey the Schroedinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  1. Quantitative assessment of computational models for retinotopic map formation

    PubMed Central

    Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2014-01-01

    Molecular and activity-based cues acting together are thought to guide retinal axons to their terminal sites in vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity-based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2-EphA3 ki/ki, Isl2-EphA3 ki/+, ephrin-A2,A3,A5 triple knock-out (TKO), and Math5 -/- (Atoh7). Two models successfully reproduced the extent of the Math5 -/- anteromedial projection, but only one of those could account for the collapse point in Isl2-EphA3 ki/+. The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin-A2,A3,A5 TKO phenotype, suggesting either an incomplete knock-out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. © 2014 Wiley Periodicals, Inc. Develop Neurobiol 75: 641-666, 2015 PMID:25367067

  2. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals

    EPA Science Inventory

    Protocols for terrestrial bioaccumulation assessments are far less-developed than for aquatic systems. This manuscript reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, inver...

  3. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    SciTech Connect

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and with a view to future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents modeling efforts built on limited data and working literature paradigms, along with recommendations for future attempts at modeling conflict.

  4. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

    PubMed Central

    Quo, Chang F; Wang, May D

    2008-01-01

    Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges, from 10^-15 to 10^10, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how best to select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performance by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e., in terms of function evaluations, partial derivatives, LU decompositions, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to select numerical solvers intelligently when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines for selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with a quantitative performance assessment metric, we show that it is possible
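
    To illustrate the stiffness issue on the same Oregonator test problem, here is a small sketch comparing a non-stiff and a stiff solver in SciPy (the scaled equations and parameters follow the standard OREGO benchmark; the solver names and tolerances are illustrative assumptions, not the article's exact MATLAB protocol):

```python
import numpy as np
from scipy.integrate import solve_ivp

def orego(t, y):
    """Scaled Oregonator model of the BZ reaction (standard OREGO benchmark)."""
    y1, y2, y3 = y
    return [77.27 * (y2 + y1 * (1.0 - 8.375e-6 * y1 - y2)),
            (y3 - (1.0 + y1) * y2) / 77.27,
            0.161 * (y1 - y3)]

y0, t_span = [1.0, 2.0, 3.0], (0.0, 360.0)

# A non-stiff (explicit Runge-Kutta) solver grinds through the stiff phases...
rk = solve_ivp(orego, t_span, y0, method="RK45", rtol=1e-6, atol=1e-9)
# ...while a stiff (implicit BDF) solver handles them far more cheaply.
bdf = solve_ivp(orego, t_span, y0, method="BDF", rtol=1e-6, atol=1e-9)

print(f"RK45: {rk.nfev} RHS evaluations; BDF: {bdf.nfev} RHS evaluations")
```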

  5. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    PubMed

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

    Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in only certain parts of the system, i.e., where kinetic data are missing. The case study of the approach proposed here is performed on a model of the three-gene repressilator.
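
    A minimal sketch of the core idea, assuming triangular membership functions and a two-rule Mamdani-style inference (the membership breakpoints and rules are invented for illustration; the paper's actual fuzzy model of the repressilator is more elaborate):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def promoter_activity(repressor):
    """Fuzzy rules standing in for an unknown kinetic (Hill) function:
       IF repressor is LOW  THEN activity is HIGH (1.0)
       IF repressor is HIGH THEN activity is LOW  (0.0)"""
    low = tri(repressor, -0.5, 0.0, 0.6)   # membership: repressor is LOW
    high = tri(repressor, 0.4, 1.0, 1.5)   # membership: repressor is HIGH
    # Weighted-average defuzzification of the two rule outputs.
    return (low * 1.0 + high * 0.0) / (low + high + 1e-12)

for r in [0.0, 0.3, 0.5, 0.7, 1.0]:
    print(f"repressor={r:.1f} -> activity={promoter_activity(r):.2f}")
```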

  6. Existing Soil Carbon Models Do Not Apply to Forested Wetlands.

    SciTech Connect

    Trettin, C C; Song, B; Jurgensen, M F; Li, C

    2001-09-14

    Evaluation of 12 widely used soil carbon models to determine applicability to wetland ecosystems. For any land area that includes wetlands, none of the individual models would produce reasonable simulations based on soil processes. Study presents a wetland soil carbon model framework based on desired attributes, the DNDC model and components of the CENTURY and WMEM models. Proposed synthesis would be appropriate when considering soil carbon dynamics at multiple spatial scales and where the land area considered includes both wetland and upland ecosystems.

  7. Training of Existing Workers: Issues, Incentives and Models

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This report presents issues associated with incentives for training existing workers in small to medium-sized firms, identified through a small sample of case studies from the retail, manufacturing, and building and construction industries. While the majority of employers recognise workforce skill levels are fundamental to the success of the…

  8. A quantitative risk assessment model for Salmonella and whole chickens.

    PubMed

    Oscar, Thomas P

    2004-06-01

    Existing data and predictive models were used to define the input settings of a previously developed but modified quantitative risk assessment model (QRAM) for Salmonella and whole chickens. The QRAM was constructed in an Excel spreadsheet and was simulated using @Risk. The retail-to-table pathway was modeled as a series of unit operations and associated pathogen events that included initial contamination at retail, growth during consumer transport, thermal inactivation during cooking, cross-contamination during serving, and dose response after consumption. Published data as well as predictive models for growth and thermal inactivation of Salmonella were used to establish input settings. Noncontaminated chickens were simulated so that the QRAM could predict changes in the incidence of Salmonella contamination. The incidence of Salmonella contamination changed from 30% at retail to 0.16% after cooking to 4% at consumption. Salmonella growth on chickens during consumer transport was the only pathogen event that did not impact the risk of salmonellosis. For the scenario simulated, the QRAM predicted 0.44 cases of salmonellosis per 100,000 consumers, which was consistent with recent epidemiological data that indicate a rate of 0.66-0.88 cases of salmonellosis per 100,000 consumers of chicken. Although the QRAM was in agreement with the epidemiological data, surrogate data and models were used, assumptions were made, and potentially important unit operations and pathogen events were not included because of data gaps; thus, further refinement of the QRAM is needed.
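
    The retail-to-table chain described here maps naturally onto a Monte Carlo simulation. Below is a toy sketch of that structure (all distributions and parameter values are invented placeholders, not the calibrated inputs of Oscar's QRAM):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                                   # simulated servings

# Unit operations as sequential pathogen events (placeholder parameters).
contaminated = rng.random(N) < 0.30           # incidence at retail
log_cfu = rng.normal(1.0, 0.8, N)             # initial load, log10 CFU
log_cfu += rng.normal(0.1, 0.05, N)           # growth during consumer transport
log_cfu -= rng.normal(6.0, 1.0, N)            # thermal inactivation during cooking
survives = contaminated & (log_cfu > 0)
cross = contaminated & (rng.random(N) < 0.03) # cross-contamination at serving
dose = np.where(survives | cross, 10.0 ** np.clip(log_cfu, 0, None), 0.0)

p_ill = 1.0 - np.exp(-2.5e-5 * dose)          # exponential dose-response
cases = rng.random(N) < p_ill
print(f"predicted cases per 100,000 consumers: {cases.sum() / N * 1e5:.2f}")
```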

  9. A Quantitative Model of Expert Transcription Typing

    DTIC Science & Technology

    1993-03-08

    monitoring the accuracy of the typing ... the deterioration of typing rate that occurs as the text is modified from normal prose to non-language or random ... letters ... [for] non-alphabetical keys. (p. 6) Rumelhart and Norman also do not attempt to make zero-parameter quantitative predictions of typing ... Salthouse's two-choice reaction time task was somewhat non-standard: stimuli were uppercase and lowercase versions of the letters L and R, and responses

  10. Continuous-time random walk models of DNA electrophoresis in a post array: part I. Evaluation of existing models.

    PubMed

    Olson, Daniel W; Ou, Jia; Tian, Mingwei; Dorfman, Kevin D

    2011-02-01

    Several continuous-time random walk (CTRW) models exist to predict the dynamics of DNA in micropost arrays, but none of them quantitatively describes the separation seen in experiments or simulations. In Part I of this series, we examine the assumptions underlying these models by observing single molecules of λ DNA during electrophoresis in a regular, hexagonal array of oxidized silicon posts. Our analysis takes advantage of a combination of single-molecule videomicroscopy and previous Brownian dynamics simulations. Using a custom-tracking program, we automatically identify DNA-post collisions and thus study a large ensemble of events. Our results show that the hold-up time and the distance between collisions for consecutive collisions are uncorrelated. The distance between collisions is a random variable, but it can be smaller than the minimum value predicted by existing models of DNA transport in post arrays. The current CTRW models correctly predict the exponential decay in the probability density of the collision hold-up times, but they fail to account for the influence of finite-sized posts on short hold-up times. The shortcomings of the existing models identified here motivate the development of a new CTRW approach, which is presented in Part II of this series.
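
    The CTRW picture being tested can be sketched in a few lines: a molecule alternates between exponentially distributed hold-up times at posts and free runs between collisions, and the effective velocity follows from the ensemble (the parameter values and the exponential run-length assumption are illustrative, not fitted to the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000                     # number of simulated collision cycles

tau_hold = 0.5                  # mean hold-up time at a post (s), illustrative
d_run = 3.0                     # mean distance between collisions (um), illustrative
v_free = 10.0                   # free electrophoretic velocity (um/s), illustrative

hold = rng.exponential(tau_hold, n)   # exponential hold-up times, as in CTRW models
run = rng.exponential(d_run, n)       # run lengths; real data show a lower cutoff set
                                      # by the finite post size, unlike this model
t_total = hold.sum() + (run / v_free).sum()
print(f"effective velocity: {run.sum() / t_total:.2f} um/s "
      f"(free velocity {v_free} um/s)")
```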

  11. Geysers of Enceladus: Quantitative analysis of qualitative models

    NASA Astrophysics Data System (ADS)

    Brilliantov, Nikolai V.; Schmidt, Jürgen; Spahn, Frank

    2008-11-01

    Aspects of two qualitative models of Enceladus' dust plume - the so-called "Cold Faithful" [Porco, C.C., et al., 2006. Cassini observes the active south pole of Enceladus. Science 311, 1393-1401; Ingersoll, A.P., et al., 2006. Models of the Enceladus plumes. In: Bulletin of the American Astronomical Society, vol. 38, p. 508] and "Frigid Faithful" [Kieffer, S.W., et al., 2006. A clathrate reservoir hypothesis for Enceladus' south polar plume. Science 314, 1764; Gioia, G., et al., 2007. Unified model of tectonics and heat transport in a Frigid Enceladus. Proc. Natl. Acad. Sci. 104, 13578-13591] models - are analyzed quantitatively. The former model assumes an explosive boiling of subsurface liquid water, when pressure exerted by the ice crust is suddenly released due to an opening crack. In the latter model the existence of a deep shell of clathrates below Enceladus' south pole is conjectured; clathrates can decompose explosively when exposed to vacuum through a fracture in the outer icy shell. For the Cold Faithful model we estimate the maximal velocity of ice grains, originating from water splashing in explosive boiling. We find that for water near the triple point this velocity is far too small to explain the observed plume properties. For the Frigid Faithful model we consider the problem of momentum transfer from gas to ice particles. It arises since any change in the direction of the gas flow in the cracks of the shell requires re-acceleration of the entrained grains. While this effect may explain the observed speed difference of gas and grains if the gas evaporates from triple point temperature (273.15 K) [Schmidt, J., et al., 2008. Formation of Enceladus dust plume. Nature 451, 685], the low temperatures of the Frigid Faithful model (~140-170 K) imply a too dilute vapor to support the observed high particle fluxes in Enceladus' plume.
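
    A back-of-the-envelope calculation conveys why the source temperature matters so much for grain speeds: the mean thermal speed of water vapor, sqrt(8RT/(pi*M)), which bounds how fast entrained grains can be re-accelerated, drops substantially between the triple point and Frigid Faithful temperatures (this is our illustrative estimate, not a calculation reproduced from the paper):

```python
import math

R = 8.314          # gas constant, J/(mol K)
M = 0.018          # molar mass of water, kg/mol

def mean_thermal_speed(T):
    """Mean speed of a Maxwell-Boltzmann gas: sqrt(8RT / (pi M))."""
    return math.sqrt(8.0 * R * T / (math.pi * M))

for T in (273.15, 170.0, 140.0):
    print(f"T = {T:6.2f} K -> mean vapor speed ~ {mean_thermal_speed(T):5.0f} m/s")
```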

  12. Mathematical Existence Results for the Doi-Edwards Polymer Model

    NASA Astrophysics Data System (ADS)

    Chupin, Laurent

    2017-01-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two dimensional case, without any restriction on the smallness of the data.

  13. Existence of solutions for a host-parasite model

    NASA Astrophysics Data System (ADS)

    Milner, Fabio Augusto; Patton, Curtis Allan

    2001-12-01

    The sea bass Dicentrarchus labrax has several gill ectoparasites. Diplectanum aequans (Plathelminth, Monogenea) is one of these species. Under certain demographic conditions, this flat worm can trigger pathological problems, in particular in fish farms. The life cycle of the parasite is described and a model for the dynamics of its interaction with the fish is described and analyzed. The model consists of a coupled system of ordinary differential equations and one integro-differential equation.

  14. LDEF data correlation to existing NASA debris environment models

    NASA Technical Reports Server (NTRS)

    Atkinson, Dale R.; Allbrooks, Martha K.; Watts, Alan J.

    1991-01-01

    Since the Long Duration Exposure Facility was gravity gradient stabilized and did not rotate, the directional dependence of the flux can be easily distinguished. During the deintegration of LDEF, all impact features larger than 0.5 mm into aluminum were documented for diameters and locations. In addition, the diameters and locations of all impact features larger than 0.3 mm into Scheldahl G411500 thermal control blankets were also documented. These data, along with additional information collected from LDEF materials archived at NASA Johnson Space Center (JSC) on smaller features, will be compared with current meteoroid and debris models. This comparison will provide a validation of the models and will identify discrepancies between the models and the data.

  15. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
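
    The common framework the authors derive treats generators as second-order phase oscillators with forcing and damping (swing dynamics). A minimal sketch of that oscillator network, with made-up parameters and a made-up coupling matrix rather than the paper's test systems:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-node network: one generator injecting power, two nodes drawing it.
K = np.array([[0, 5, 5],
              [5, 0, 5],
              [5, 5, 0]], dtype=float)   # coupling strengths (illustrative)
P = np.array([2.0, -1.0, -1.0])          # net injected power per node (sums to zero)
M, D = 1.0, 0.5                          # inertia and damping (identical here)

def swing(t, y):
    """Second-order Kuramoto / swing equations:
       M d2(theta_i)/dt2 = P_i - D d(theta_i)/dt + sum_j K_ij sin(theta_j - theta_i)."""
    n = len(P)
    theta, omega = y[:n], y[n:]
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return np.concatenate([omega, (P - D * omega + coupling) / M])

sol = solve_ivp(swing, (0, 20), np.zeros(6))
print("final frequency deviations:", np.round(sol.y[3:, -1], 3))
```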

  16. Determining if Instructional Delivery Model Differences Exist in Remedial English

    ERIC Educational Resources Information Center

    Carter, LaTanya Woods

    2012-01-01

    The purpose of this causal comparative study is to test the theory of no significant difference that compares pre- and post-test assessment scores, controlling for the instructional delivery model of online and face-to-face students at a Mid-Atlantic university. Online education and virtual distance learning programs have increased in popularity…

  17. Exploring Higher Education Business Models ("If Such a Thing Exists")

    ERIC Educational Resources Information Center

    Harney, John O.

    2013-01-01

    The global economic recession has caused students, parents, and policymakers to reevaluate personal and societal investments in higher education--and has prompted the realization that traditional higher ed "business models" may be unsustainable. Predicting a shakeout, most presidents expressed confidence for their own school's ability to…

  18. A Quantitative Software Risk Assessment Model

    NASA Technical Reports Server (NTRS)

    Lee, Alice

    2002-01-01

    This slide presentation reviews a risk assessment model as applied to software development. The presentation uses graphs to demonstrate basic concepts of software reliability. It also discusses the application of the risk model to the software development life cycle.

  19. A review: Quantitative models for lava flows on Mars

    NASA Technical Reports Server (NTRS)

    Baloga, S. M.

    1987-01-01

    The purpose of this abstract is to review and assess the application of quantitative models (Gratz numerical correlation model, radiative loss model, yield stress model, surface structure model, and kinematic wave model) of lava flows on Mars. These theoretical models were applied to Martian flow data to aid in establishing the composition of the lava or to determine other eruption conditions such as eruption rate or duration.
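
    As an example of what the yield stress model involves, flow geometry can be inverted for the lava's yield strength via the standard Bingham relation tau_y = rho * g * h * sin(theta); the numbers below are illustrative stand-ins, not values from the review:

```python
import math

rho = 2500.0      # lava density, kg/m^3 (illustrative)
g = 3.71          # Mars surface gravity, m/s^2
h = 15.0          # measured flow thickness, m (illustrative)
slope_deg = 0.5   # underlying slope, degrees (illustrative)

# Yield stress model: a Bingham flow stops where the basal shear stress
# rho*g*h*sin(theta) falls to the yield strength tau_y.
tau_y = rho * g * h * math.sin(math.radians(slope_deg))
print(f"inferred yield strength: {tau_y:.0f} Pa")
```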

  20. LDEF data correlation to existing NASA debris environment models

    NASA Technical Reports Server (NTRS)

    Atkinson, Dale R.; Allbrooks, Martha K.; Watts, Alan J.

    1992-01-01

    The Long Duration Exposure Facility (LDEF) was recovered in January 1990, following 5.75 years of exposure of about 130 sq. m to low-Earth orbit. About 25 sq. m of this surface area was aluminum 6061 T-6 exposed in every direction. In addition, about 17 sq. m of Scheldahl G411500 silver-Teflon thermal control blankets were exposed in 9 of the 12 directions. Since the LDEF was gravity gradient stabilized and did not rotate, the directional dependence of the flux can be easily distinguished. During the deintegration of the LDEF, all impact features larger than 0.5 mm into aluminum were documented for diameters and locations. In addition, the diameters and locations of all impact features larger than 0.3 mm into Scheldahl G411500 thermal control blankets were also documented. These data, along with additional information collected from LDEF materials, will be compared with current meteoroid and debris models. This comparison will provide a validation of the models and will identify discrepancies between the models and the data.

  1. A Transformative Model for Undergraduate Quantitative Biology Education

    ERIC Educational Resources Information Center

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3)…

  2. The Impact of School Climate on Student Achievement in the Middle Schools of the Commonwealth of Virginia: A Quantitative Analysis of Existing Data

    ERIC Educational Resources Information Center

    Bergren, David Alexander

    2014-01-01

    This quantitative study was designed to be an analysis of the relationship between school climate and student achievement through the creation of an index of climate-factors (SES, discipline, attendance, and school size) for which publicly available data existed. The index that was formed served as a proxy measure of climate; it was analyzed…

  3. Quantitative Model of the Cerro Prieto Field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-01-21

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field and takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled; the results match the observed pressure and temperature distributions reasonably well. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  4. Modeling with Young Students--Quantitative and Qualitative.

    ERIC Educational Resources Information Center

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…

  5. Quantitative description and modeling of real networks

    NASA Astrophysics Data System (ADS)

    Capocci, Andrea; Caldarelli, Guido; de Los Rios, Paolo

    2003-10-01

    We present data analysis and modeling of two particular case studies in the field of growing networks. We analyze a World Wide Web data set and authorship collaboration networks in order to check for the presence of correlations in the data. The results are reproduced with good agreement through a suitable modification of the standard Albert-Barabási model of network growth. In particular, the intrinsic relevance of sites plays a role in determining the future degree of a vertex.
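
    The modification mentioned (intrinsic relevance, or fitness, biasing growth) corresponds to attachment probabilities proportional to fitness times degree. A compact sketch of such a growth model, with an arbitrary fitness distribution standing in for the paper's calibrated variant:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 2000, 2                       # final network size; links added per new node

fitness = rng.random(n)              # intrinsic relevance of each site (illustrative)
degree = np.zeros(n)
degree[:m + 1] = m                   # small fully connected seed (a triangle)

for new in range(m + 1, n):
    # Attachment probability ~ fitness_i * degree_i (fitness-biased growth).
    w = fitness[:new] * degree[:new]
    targets = rng.choice(new, size=m, replace=False, p=w / w.sum())
    degree[targets] += 1
    degree[new] += m

top = np.argsort(degree)[-3:]
print("highest-degree nodes (node, fitness):",
      [(int(i), round(float(fitness[i]), 2)) for i in top])
```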

  6. Quantitative magnetospheric models: results and perspectives.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in the global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.

  7. A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.

    PubMed

    Bruneton, Eric

    2016-10-27

    We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer simplifications and approximations are used to solve the physical equations, the more accurate the results. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy.

  8. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the University of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  9. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals.

    PubMed

    Gobas, Frank A P C; Burkhard, Lawrence P; Doucette, William J; Sappington, Keith G; Verbruggen, Eric M J; Hope, Bruce K; Bonnell, Mark A; Arnot, Jon A; Tarazona, Jose V

    2016-01-01

    Protocols for terrestrial bioaccumulation assessments are far less developed than for aquatic systems. This article reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, invertebrate, mammal, and avian species and for entire terrestrial food webs, including some that consider spatial factors. Limitations and gaps in terrestrial bioaccumulation modeling include the lack of QSARs for biotransformation and dietary assimilation efficiencies for terrestrial species; the lack of models and QSARs for important terrestrial species such as insects, amphibians, and reptiles; the lack of standardized testing protocols for plants, with limited development of plant models; and the limited chemical domain of existing bioaccumulation models and QSARs (e.g., primarily applicable to nonionic organic chemicals). There is an urgent need for high-quality field data sets for validating models and assessing their performance. There is a need to improve coordination among laboratory, field, and modeling efforts on bioaccumulative substances in order to improve the state of the science for challenging substances.

  10. Relevance of MTF and NPS in quantitative CT: towards developing a predictable model of quantitative performance

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Richard, Samuel; Samei, Ehsan

    2012-03-01

    The quantification of lung nodule volume based on CT images provides valuable information for disease diagnosis and staging. However, the precision of the quantification is protocol, system, and technique dependent and needs to be evaluated for each specific case. To efficiently investigate the quantitative precision and find an optimal operating point, it is important to develop a predictive model based on basic system parameters. In this study, a Fourier-based metric, the estimability index (e'), was proposed as such a predictor and validated across a variety of imaging conditions. To first obtain the ground truth of quantitative precision, an anthropomorphic chest phantom with synthetic spherical nodules was imaged on a 64-slice CT scanner across a range of protocols (five exposure levels and two reconstruction algorithms). The volumes of the nodules were quantified from the images using clinical software, with the precision of the quantification calculated for each protocol. To predict the precision, e' was calculated for each protocol based on several Fourier-based figures of merit, which modeled the characteristics of the quantitation task and the imaging conditions (resolution, noise, etc.) of a particular protocol. Results showed a strong correlation (R^2 = 0.92) between the measured and predicted precision across all protocols, indicating that e' is an effective predictor of the quantitative precision. This study provides a useful framework for quantification-oriented optimization of CT protocols.
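
    Fourier figures of merit of this family are typically computed as frequency integrals of a task function weighted by the MTF and NPS. The sketch below evaluates a generic detectability-style index, d'^2 = integral of |W(f)|^2 MTF(f)^2 / NPS(f) over frequency, with toy analytic forms for all three ingredients (the actual definition of e' is given in the paper; everything here is an illustrative assumption):

```python
import numpy as np

f = np.linspace(1e-3, 1.5, 2000)            # spatial frequency (cycles/mm)

# Toy stand-ins for measured system and task characteristics:
mtf = np.exp(-(f / 0.5) ** 2)               # Gaussian MTF
nps = 0.05 * mtf ** 2 + 0.01                # shaped quantum noise plus a floor
w_task = np.abs(np.exp(-(f / 0.20) ** 2)    # task: discriminate two nodule
                - np.exp(-(f / 0.25) ** 2)) # sizes (difference of spectra)

# Radially symmetric 2-D integral reduced to 1-D with the 2*pi*f Jacobian.
d2 = np.trapz(w_task ** 2 * mtf ** 2 / nps * 2 * np.pi * f, f)
print(f"detectability-style index d' = {np.sqrt(d2):.3f}")
```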

  11. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  12. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task.

  13. Quantitative measurement and modeling of sensitization development in stainless steel

    SciTech Connect

    Bruemmer, S.M.; Atteridge, D.G.

    1992-09-01

    The state of the art in quantitatively measuring and modeling sensitization development in austenitic stainless steels is assessed and critically analyzed. A modeling capability is evolved and validated using a diverse experimental data base. Quantitative predictions are demonstrated for simple and complex thermal and thermomechanical treatments. Commercial stainless steel heats ranging from high-carbon Type 304 and 316 to low-carbon Type 304L and 316L have been examined, including many heats which correspond to extra-low-carbon, nuclear-grade compositions. Within certain limits the electrochemical potentiokinetic reactivation (EPR) test was found to give accurate and reproducible measurements of the degree of sensitization (DOS) in Type 304 and 316 stainless steels. EPR test results are used to develop the quantitative data base and evolve/validate the quantitative modeling capability. This thesis represents a first step toward evolving methods for the quantitative assessment of structural reliability in stainless steel components and weldments. Assessments will be based on component-specific information concerning material characteristics, fabrication history, and service exposure. Methods will enable fabrication (e.g., welding and repair welding) procedures and material aging effects to be evaluated and will help ensure adequate cracking resistance during the service lifetime of reactor components. This work is being conducted by the Oregon Graduate Institute with interactive input from personnel at Pacific Northwest Laboratory.

  14. Steady-state existence of passive vector fields under the Kraichnan model.

    PubMed

    Arponen, Heikki

    2010-03-01

    The steady-state existence problem for Kraichnan advected passive vector models is considered for isotropic and anisotropic initial values in arbitrary dimension. The models include the magnetohydrodynamic (MHD) equations, linear pressure model, and linearized Navier-Stokes (LNS) equations. In addition to reproducing the previously known results for the MHD model, we obtain the values of the Kraichnan model roughness parameter xi for which the LNS steady state exists.

  15. Existence of almost periodic solution of a model of phytoplankton allelopathy with delay

    NASA Astrophysics Data System (ADS)

    Abbas, Syed; Mahto, Lakshman

    2012-09-01

    In this paper we discuss a non-autonomous two-species competitive allelopathic phytoplankton model in which both species produce chemicals that stimulate each other's growth. We have studied the existence and uniqueness of an almost periodic solution for the model system concerned. Sufficient conditions are derived for the existence of a unique almost periodic solution.

  16. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  17. PeptideDepot: Flexible Relational Database for Visual Analysis of Quantitative Proteomic Data and Integration of Existing Protein Information

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2010-01-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895

  18. Assessment of existing IEC models and a proposed new approach for modeling gridded systems

    SciTech Connect

    Nadler, J.H.; Knoll, D.A.

    1995-12-31

    This paper assesses the capabilities of existing computer models for Inertial-Electrostatic Confinement (IEC) systems in both the glow discharge mode of operation and low-pressure, multiple-grid systems. The present computer simulations of generic IEC devices are compared to today's running gridded-IEC experiments by relating the assumptions used in the models to the physical parameters of the experiments. The pros and cons of such an approach are argued, and a list of critical parameters for a more realistic solution is offered. In addition, an alternate approach to developing a self-consistent model for these modes of operation with gridded systems is proposed.

  19. A GPGPU accelerated modeling environment for quantitatively characterizing karst systems

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Covington, M. D.; Luhmann, A. J.; Saar, M. O.

    2011-12-01

    The ability to derive quantitative information on the geometry of karst aquifer systems is highly desirable. Knowing the geometric makeup of a karst aquifer system enables quantitative characterization of the system's response to hydraulic events. However, the relationship between flow path geometry and karst aquifer response is not well understood. One method to improve this understanding is the use of high-speed modeling environments, which offer great potential in this regard as they allow researchers to improve their understanding of the modeled karst aquifer through fast quantitative characterization. To that end, we have implemented a finite difference model using General Purpose Graphics Processing Units (GPGPUs). GPGPUs are special-purpose accelerators capable of high-speed and highly parallel computation. The GPGPU architecture is a grid-like structure, making it a natural fit for structured systems like finite difference models. To characterize the highly complex nature of karst aquifer systems, our modeling environment uses an inverse method to conduct the parameter tuning. Using an inverse method reduces the total amount of parameter space that must be searched to produce a set of parameters describing a system of good fit. Systems of good fit are determined by comparison to reference storm responses. To obtain reference storm responses, we have collected data from a series of data loggers measuring water depth, temperature, and conductivity at locations along a cave stream with a known geometry in southeastern Minnesota. By comparing the modeled response to the reference responses, the model parameters can be tuned to quantitatively characterize the geometry, and thus the response, of the karst system.
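
    The inverse step can be pictured as fitting model parameters so that a simulated storm response matches a logged one; below is a toy version using a single-reservoir recession in place of the full GPGPU finite difference model (the model form, parameters, and data are all invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 48, 97)                       # hours since storm peak

def model(params, t):
    """Toy karst storm response: exponential recession of discharge."""
    q0, tau = params
    return q0 * np.exp(-t / tau)

rng = np.random.default_rng(4)
reference = model([5.0, 12.0], t) + rng.normal(0, 0.05, t.size)  # "logged" response

fit = least_squares(lambda p: model(p, t) - reference, x0=[1.0, 1.0],
                    bounds=([0, 0.1], [np.inf, np.inf]))
print("tuned parameters (q0, tau):", np.round(fit.x, 2))
```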

  20. Lessons learned from quantitative dynamical modeling in systems biology.

    PubMed

    Raue, Andreas; Schilling, Marcel; Bachmann, Julie; Matteson, Andrew; Schelker, Max; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D; Theis, Fabian J; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data, it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and an increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare, and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows the quality of experimental data to be determined in an efficient, objective, and automated manner. Using this approach, data generated by different measurement techniques, and even in single replicates, can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations, in combination with a multi-start strategy based on Latin hypercube sampling, outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here.
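
    The winning strategy reported here (derivative-based local optimization restarted from Latin hypercube samples) is straightforward to sketch; the toy model, bounds, and data below are placeholders, not the paper's signaling models:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import qmc

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 50)
data = 3.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)  # synthetic "measurements"

def residuals(p):
    a, k = p
    return a * np.exp(-k * t) - data

lo, hi = np.array([0.1, 0.01]), np.array([10.0, 5.0])
starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(20), lo, hi)

# Multi-start: run a derivative-based local optimizer from each LHS point,
# then keep the best fit found across all starts.
fits = [least_squares(residuals, x0, bounds=(lo, hi)) for x0 in starts]
best = min(fits, key=lambda f: f.cost)
print("best-fit parameters (a, k):", np.round(best.x, 3))
```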

  1. Wave propagation models for quantitative defect detection by ultrasonic methods

    NASA Astrophysics Data System (ADS)

    Srivastava, Ankit; Bartoli, Ivan; Coccia, Stefano; Lanza di Scalea, Francesco

    2008-03-01

    Ultrasonic guided wave testing necessitates quantitative, rather than qualitative, information on flaw size, shape, and position. This quantitative diagnosis ability can be used to provide meaningful data to a prognosis algorithm for remaining life prediction, or simply to generate data sets for a statistical defect classification algorithm. Quantitative diagnostics needs models able to represent the interaction of guided waves with various defect scenarios. One such model is the Global-Local (GL) method, which uses a full finite element discretization of the region around a flaw to properly represent wave diffraction, and a suitable set of wave functions to simulate regions away from the flaw. Displacement and stress continuity conditions are imposed at the boundary between the global and the local regions. In this paper the GL method is expanded to take advantage of the Semi-Analytical Finite Element (SAFE) method in the global portion of the waveguide. The SAFE method is efficient because it only requires the discretization of the cross-section of the waveguide to obtain the wave dispersion solutions, and it can handle complex structures such as multilayered sandwich panels. The GL method is applied to predicting quantitatively the interaction of guided waves with defects in aluminum and composite structural components.

  2. Existing Models of Maternal Death Surveillance Systems: Protocol for a Scoping Review

    PubMed Central

    Shahabuddin, ASM; Zhang, Wei Hong; Firoz, Tabassum; Englert, Yvon; Nejjari, Chakib; De Brouwere, Vincent

    2016-01-01

    Background Maternal mortality measurement remains a critical challenge, particularly in low and middle income countries (LMICs) where little or no data are available and maternal mortality and morbidity are often the highest in the world. Despite the progress made in data collection, underreporting and translating the results into action are two major challenges that maternal death surveillance systems (MDSSs) face in LMICs. Objective This paper presents a protocol for a scoping review aimed at synthesizing the existing models of MDSSs and factors that influence their completeness and usefulness. Methods The methodology for scoping reviews from the Joanna Briggs Institute was used as a guide for developing this protocol. A comprehensive literature search will be conducted across relevant electronic databases. We will include all articles that describe MDSSs or assess their completeness or usefulness. At least two reviewers will independently screen all articles, and discrepancies will be resolved through discussion. The same process will be used to extract data from studies fulfilling the eligibility criteria. Data analysis will involve quantitative and qualitative methods. Results Currently, the abstracts screening is under way and the first results are expected to be publicly available by mid-2017. The synthesis of the reviewed materials will be presented in tabular form completed by a narrative description. The results will be classified in main conceptual categories that will be obtained during the results extraction. Conclusions We anticipate that the results will provide a broad overview of MDSSs and describe factors related to their completeness and usefulness. The results will allow us to identify research gaps concerning the barriers and facilitating factors facing MDSSs. Results will be disseminated through publication in a peer-reviewed journal and conferences as well as domestic and international agencies in charge of implementing MDSS. PMID:27729305

  3. Quantitative Systems Pharmacology: A Case for Disease Models

    PubMed Central

    Ramanujan, S; Schmidt, BJ; Ghobrial, OG; Lu, J; Heatherington, AC

    2016-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model‐informed drug discovery and development, supporting program decisions from exploratory research through late‐stage clinical trials. In this commentary, we discuss the unique value of disease‐scale “platform” QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. PMID:27709613

  4. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make the MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect the hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is a suitable MMM parameter with which to establish a reliability model of welded joints. At last, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
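
    In classical stress-strength interference theory (the basis named here; the paper's improved variant and its parameter values are not reproduced), the reliability for normally distributed stress L and strength S is R = Phi((mu_S - mu_L) / sqrt(sigma_S^2 + sigma_L^2)). A quick sketch with invented numbers:

```python
from math import sqrt
from scipy.stats import norm

# Invented illustrative moments (MPa): strength S and applied stress L.
mu_s, sigma_s = 420.0, 30.0
mu_l, sigma_l = 320.0, 40.0

# R = P(S > L) for independent normal S and L.
z = (mu_s - mu_l) / sqrt(sigma_s**2 + sigma_l**2)
print(f"reliability degree R = {norm.cdf(z):.4f}")
```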

  5. Autonomous Temperature Data Acquisition Compared to Existing Thermal Models of Different Sediments

    NASA Astrophysics Data System (ADS)

    Jackson, R. G.; Sanders, N. H.; Ward, C. C.; Ward, F. R.; Benson, S. M.; Lee, N. F.

    2010-03-01

    Our team wanted to improve sampling methods for experiments on the thermal properties of martian sediments. We built a robot that could take the data autonomously over a period of days, and then compared them to existing models.

  6. Modeling the Earth's radiation belts. A review of quantitative data based electron and proton models

    NASA Technical Reports Server (NTRS)

    Vette, J. I.; Teague, M. J.; Sawyer, D. M.; Chan, K. W.

    1979-01-01

    The evolution of quantitative models of the trapped radiation belts is traced to show how knowledge of the various features has developed, or been clarified, by performing the required analysis and synthesis. The Starfish electron injection introduced problems in the time behavior of the inner zone, but this residue decayed away, and a good model of this depletion now exists. The outer zone electrons were handled statistically by a log normal distribution such that above 5 Earth radii there are no long term changes over the solar cycle. The transition region between the two zones presents the most difficulty; therefore the behavior of individual substorms as well as long term changes must be studied. The latest corrections to the electron environment based on new data are outlined. The proton models have evolved to the point where the solar cycle effect at low altitudes is included. Trends for new models are discussed; the feasibility of predicting substorm injections and solar wind high-speed streams makes the modeling of individual events a topical activity.

  7. Quantitative phase-field modeling of dendritic electrodeposition

    NASA Astrophysics Data System (ADS)

    Cogswell, Daniel A.

    2015-07-01

    A thin-interface phase-field model of electrochemical interfaces is developed based on Marcus kinetics for concentrated solutions, and used to simulate dendrite growth during electrodeposition of metals. The model is derived in the grand electrochemical potential to permit the interface to be widened to reach experimental length and time scales, and electroneutrality is formulated to eliminate the Debye length. Quantitative agreement is achieved with zinc Faradaic reaction kinetics, fractal growth dimension, tip velocity, and radius of curvature. Reducing the exchange current density is found to suppress the growth of dendrites, and screening electrolytes by their exchange currents is suggested as a strategy for controlling dendrite growth in batteries.

  8. Quantitative magnetospheric models derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.

  9. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    SciTech Connect

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-07-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
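
    The Markov machinery referred to is standard: encode component states, fill a transition rate matrix Q, and propagate the state probabilities as P(t) = P(0) exp(Qt). A minimal sketch for a hypothetical three-state control channel (the states and rates are invented, not the feedwater system's):

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = nominal, 1 = degraded (detected, repairable), 2 = failed.
lam_d = 1e-3   # nominal -> degraded rate, per hour (invented)
lam_f = 5e-4   # degraded -> failed rate, per hour (invented)
mu = 2e-2      # degraded -> nominal (repair) rate, per hour (invented)

Q = np.array([[-lam_d,         lam_d,   0.0],
              [    mu, -(mu + lam_f), lam_f],
              [   0.0,           0.0,   0.0]])   # failed state is absorbing

p0 = np.array([1.0, 0.0, 0.0])
for t in (100.0, 1000.0, 10000.0):
    p = p0 @ expm(Q * t)
    print(f"t = {t:7.0f} h: P(failed) = {p[2]:.4f}")
```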

  10. Possibility of quantitative prediction of cavitation erosion without model test

    SciTech Connect

    Kato, Hiroharu; Konno, Akihisa; Maeda, Masatsugu; Yamaguchi, Hajime

    1996-09-01

    A scenario for quantitative prediction of cavitation erosion was proposed. The key value is the impact force/pressure spectrum on a solid surface caused by cavitation bubble collapse. As the first step of prediction, the authors constructed the scenario from an estimation of the cavity generation rate to the prediction of impact force spectrum, including the estimations of collapsing cavity number and impact pressure. The prediction was compared with measurements of impact force spectra on a partially cavitating hydrofoil. A good quantitative agreement was obtained between the prediction and the experiment. However, the present method predicted a larger effect of main flow velocity than that observed. The present scenario is promising as a method of predicting erosion without using a model test.

  11. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    PubMed Central

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  12. Functional Linear Models for Association Analysis of Quantitative Traits

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Mills, James L.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao

    2014-01-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. PMID:24130119

  13. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants, adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.

  14. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    NASA Technical Reports Server (NTRS)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) Optimization tools; 2) Gas turbine models for hybrid systems; 3) Increased interplay between subsystems; 4) Off-design modeling capabilities; 5) Altitude effects; and 6) Existing transient modeling architecture. Other factors include: 1) Easier transfer between users and groups of users; 2) General aerospace industry acceptance and familiarity; and 3) A flexible analysis tool that can also be used for ground power applications.

  15. On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.

    ERIC Educational Resources Information Center

    Fischer, Gerhard H.

    1981-01-01

    Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)

  16. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  17. Quantitative Modeling of Growth and Dispersal in Population Models.

    DTIC Science & Technology

    1986-01-01

    partial differential equations. Applications to dispersal and nonlinear density-dependent growth/predation models are presented. Computational results using ... depend only on size x. The ideas we present here can be readily modified to treat, theoretically and computationally, the more general case where g and m

  18. A pleiotropic nonadditive model of variation in quantitative traits

    SciTech Connect

    Caballero, A.; Keightley, P.D.

    1994-11-01

    A model of mutation-selection-drift balance incorporating pleiotropic and dominance effects of new mutations on quantitative traits and fitness is investigated and used to predict the amount and nature of genetic variation maintained in segregating populations. The model is based on recent information on the joint distribution of mutant effects on bristle traits and fitness in Drosophila melanogaster from experiments on the accumulation of spontaneous and P element-induced mutations. Mutants of large effect tend to be partially recessive while those with smaller effect are on average additive, but apparently with very variable gene action. The model is parameterized with two different sets of information derived from P element insertion and spontaneous mutation data, though the latter are not fully known. They differ in the number of mutations per generation that are assumed to affect the trait. Predictions of the variance maintained for bristle number assuming parameters derived from effects of P element insertions fit reasonably well with experimental observations. The equilibrium genetic variance is nearly independent of the degree of dominance of new mutations. Heritabilities between 0.4 and 0.6 are predicted with population sizes from 10^4 to 10^6, and most of the variance for the metric trait in segregating populations is due to a small proportion of mutations with neutral or nearly neutral effects on fitness and intermediate effects on the trait. Much of the genetic variance is contributed by recessive or partially recessive mutants, but only a small proportion of the genetic variance is dominance variance. If a model is assumed in which all mutation events have an effect on the quantitative trait, the majority of the genetic variance is contributed by deleterious mutations with tiny effects on the trait. Under such a model, the heritability is about 0.1, independent of the population size. 83 refs., 8 figs., 8 tabs.

  19. A Systems Perspective on Situation Awareness I: Conceptual Framework, Modeling, and Quantitative Measurement

    DTIC Science & Technology

    2003-05-01

    A Systems Perspective on Situation Awareness I: Conceptual Framework, Modeling, and Quantitative Measurement. Alex Kirlik, Institute of Aviation.

  20. Discrete modeling of hydraulic fracturing processes in a complex pre-existing fracture network

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rutqvist, J.; Nakagawa, S.; Houseworth, J. E.; Birkholzer, J. T.

    2015-12-01

    Hydraulic fracturing and stimulation of fracture networks are widely used by the energy industry (e.g., shale gas extraction, enhanced geothermal systems) to increase the permeability of geological formations. Numerous analytical and numerical models have been developed to help understand and predict the behavior of hydraulically induced fractures. However, many existing models assume simple fracturing scenarios with highly idealized fracture geometries (e.g., propagation of a single fracture with assumed shapes in a homogeneous medium). Modeling hydraulic fracture propagation in the presence of natural fractures and heterogeneities can be very challenging because of the complex interactions between fluid, rock matrix, and rock interfaces, as well as the interactions between propagating fractures and pre-existing natural fractures. In this study, the TOUGH-RBSN code for coupled hydro-mechanical modeling is utilized to simulate hydraulic fracture propagation and its interaction with pre-existing fracture networks. The simulation tool combines TOUGH2, a simulator of subsurface multiphase flow and mass transport based on the finite volume approach, with the implementation of a lattice modeling approach for geomechanical and fracture-damage behavior, named Rigid-Body-Spring Network (RBSN). The discrete fracture network (DFN) approach is facilitated by the Voronoi discretization via a fully automated modeling procedure. The numerical program is verified through a simple simulation for single fracture propagation, in which the resulting fracture geometry is compared to an analytical solution for given fracture length and aperture. Subsequently, predictive simulations are conducted for planned laboratory experiments using rock-analogue (soda-lime glass) samples containing a designed, pre-existing fracture network. The results of a preliminary simulation demonstrate selective fracturing and fluid infiltration along the pre-existing fractures, with additional fracturing in part

  1. Quantitative model studies for interfaces in organic electronic devices

    NASA Astrophysics Data System (ADS)

    Gottfried, J. Michael

    2016-11-01

    In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.

  2. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  3. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent across successive steps in the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
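
    The role of queues can be made concrete with a toy discrete-event sketch (illustrative parameters, not the Listeria model from the paper): a shelf replenished to a fixed level each day and emptied first-in-first-out by stochastic demand yields a storage-time distribution whose tail, the part that matters for QMRA, is invisible to a fixed storage-time assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

shelf = []                 # arrival days of items on the shelf, oldest first (FIFO)
storage_times = []         # realized storage time of each item sold
ORDER_UP_TO = 60           # shelf is restocked to this level every day

for day in range(2000):
    shelf += [day] * (ORDER_UP_TO - len(shelf))   # daily order-up-to replenishment
    for _ in range(rng.poisson(20)):              # stochastic daily demand
        if shelf:
            storage_times.append(day - shelf.pop(0))

st = np.array(storage_times)
print(f"mean storage: {st.mean():.1f} d, "
      f"99th percentile: {np.percentile(st, 99):.0f} d")  # the tail QMRA cares about
```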

  4. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    NASA Astrophysics Data System (ADS)

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-01

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy, and results for the ITER antenna configuration, including detailed maps of electric field, and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model [1], using the finite-difference electromagnetic Vorpal/Vsim software [2]. This model has been augmented with a non-linear rf-sheath sub-grid model [3], which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex description of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves [4] in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques [5] to ensure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential should enable the modeling of sheath plasma waves, a predicted additional root to the dispersion

  5. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    SciTech Connect

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-12

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy, and results for the ITER antenna configuration, including detailed maps of electric field, and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model, using the finite-difference electromagnetic Vorpal/Vsim software. This model has been augmented with a non-linear rf-sheath sub-grid model, which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex description of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques to ensure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential should enable the modeling of sheath plasma waves, a predicted additional root to the dispersion, existing at the

  6. Existence and uniqueness of solutions from the LEAP equilibrium energy-economy model

    SciTech Connect

    Oblow, E.M.

    1982-10-01

    A study was made of the existence and uniqueness of solutions to the long-range energy-economy model LEAP. The code is a large-scale, long-range (50 year) equilibrium model of energy supply and demand in the US economy used for government and industrial forecasting. The study focused on the two features which distinguish LEAP from other equilibrium models: the treatment of product allocation and the basic conversion of materials into an energy end product. Both allocation and conversion processes are modeled in a behavioral fashion which differs from classical economic paradigms. The results of the study indicate that while LEAP contains desirable behavioral features, these same features can give rise to non-uniqueness in the solution of the allocation and conversion process equations. Conditions under which existence and uniqueness of solutions might not occur are developed in detail, and their impact on practical applications is discussed.

  7. Existence of global weak solution for a reduced gravity two and a half layer model

    SciTech Connect

    Guo, Zhenhua Li, Zilai Yao, Lei

    2013-12-15

    We investigate the existence of a global weak solution to a reduced gravity two and a half layer model in a one-dimensional bounded spatial domain or periodic domain. We also show that any possible vacuum state must vanish within finite time, after which the weak solution becomes a unique strong one.

  8. Existence of standard models of conic fibrations over non-algebraically-closed fields

    SciTech Connect

    Avilov, A A

    2014-12-31

    We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  9. The existence of Newtonian analogs of a class of 5D Wesson's cosmological models

    NASA Astrophysics Data System (ADS)

    Waga, I.

    1992-07-01

    The conditions for the existence of Newtonian analogs of a five-dimensional (5D) generalization of the Friedmann-Robertson-Walker (FRW) cosmological models in Wesson's gravitational theory are re-analyzed. Contrary to other claims, we show that classical analogs can be obtained for a non-null cosmological constant and negative or null spatial curvature.

  10. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  11. Variability of model-free and model-based quantitative measures of EEG.

    PubMed

    Van Albada, Sacha J; Rennie, Christopher J; Robinson, Peter A

    2007-06-01

    Variable contributions of state and trait to the electroencephalographic (EEG) signal affect the stability over time of EEG measures, quite apart from other experimental uncertainties. The extent of intraindividual and interindividual variability is an important factor in determining the statistical, and hence possibly clinical, significance of observed differences in the EEG. This study investigates the changes in classical quantitative EEG (qEEG) measures, as well as in parameters obtained by fitting frequency spectra to an existing continuum model of brain electrical activity. These parameters may have extra variability due to model selection and fitting. Besides estimating the levels of intraindividual and interindividual variability, we determined approximate time scales for change in qEEG measures and model parameters. This provides an estimate of the recording length needed to capture a given percentage of the total intraindividual variability. Also, if more precise time scales can be obtained in future, these may aid the characterization of physiological processes underlying various EEG measures. Heterogeneity of the subject group was constrained by testing only healthy males in a narrow age range (mean = 22.3 years, sd = 2.7). Eyes-closed EEGs of 32 subjects were recorded at weekly intervals over an approximately six-week period, of which 13 subjects were followed for a year. QEEG measures, computed from Cz spectra, were powers in five frequency bands, alpha peak frequency, and spectral entropy. Of these, theta, alpha, and beta band powers were most reproducible. Of the nine model parameters obtained by fitting model predictions to experiment, the most reproducible ones quantified the total power and the time delay between cortex and thalamus. About 95% of the maximum change in spectral parameters was reached within minutes of recording time, implying that repeat recordings are not necessary to capture the bulk of the variability in EEG spectra.

  12. Existence of Limit Cycles in the Solow Model with Delayed-Logistic Population Growth

    PubMed Central

    2014-01-01

    This paper is devoted to the existence and stability analysis of limit cycles in a delayed mathematical model for economic growth. Specifically, the Solow model is further improved by inserting the time delay into the logistic population growth rate. Moreover, by choosing the time delay as a bifurcation parameter, we prove that the system loses its stability and a Hopf bifurcation occurs when the time delay passes through critical values. Finally, numerical simulations are carried out to support the analytical results. PMID:24592147

  13. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    PubMed

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-11-06

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams, with data-centric annotations derived from our existing data warehouse.

  14. Existence of limit cycles in the Solow model with delayed-logistic population growth.

    PubMed

    Bianca, Carlo; Guerrini, Luca

    2014-01-01

    This paper is devoted to the existence and stability analysis of limit cycles in a delayed mathematical model for economic growth. Specifically, the Solow model is further improved by inserting the time delay into the logistic population growth rate. Moreover, by choosing the time delay as a bifurcation parameter, we prove that the system loses its stability and a Hopf bifurcation occurs when the time delay passes through critical values. Finally, numerical simulations are carried out to support the analytical results.

  15. Mentoring for junior medical faculty: Existing models and suggestions for low-resource settings.

    PubMed

    Menon, Vikas; Muraleedharan, Aparna; Bhat, Ballambhattu Vishnu

    2016-02-01

    Globally, there is increasing recognition of the positive benefits and impact of mentoring on faculty retention rates, career satisfaction and scholarly output. However, emphasis on the research and practice of mentoring is comparatively meagre in low and middle income countries. In this commentary, we critically examine two existing models of mentorship for medical faculty and offer a few suggestions for an integrated hybrid model that can be adapted for use in low-resource settings.

  16. Model-based quantitative laser Doppler flowmetry in skin

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2010-09-01

    Laser Doppler flowmetry (LDF) can be used for assessing the microcirculatory perfusion. However, conventional LDF (cLDF) gives only a relative perfusion estimate for an unknown measurement volume, with no information about the blood flow speed distribution. To overcome these limitations, a model-based analysis method for quantitative LDF (qLDF) is proposed. The method uses inverse Monte Carlo technique with an adaptive three-layer skin model. By analyzing the optimal model where measured and simulated LDF spectra detected at two different source-detector separations match, the absolute microcirculatory perfusion for a specified speed region in a predefined volume is determined. qLDF displayed errors <12% when evaluated using simulations of physiologically relevant variations in the layer structure, in the optical properties of static tissue, and in blood absorption. Inhomogeneous models containing small blood vessels, hair, and sweat glands displayed errors <5%. Evaluation models containing single larger blood vessels displayed significant errors but could be dismissed by residual analysis. In vivo measurements using local heat provocation displayed a higher perfusion increase with qLDF than cLDF, due to nonlinear effects in the latter. The qLDF showed that the perfusion increase occurred due to an increased amount of red blood cells with a speed >1 mm/s.

  17. Quantitative model of the growth of floodplains by vertical accretion

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial-duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least-squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to the estimation of floodplain growth for other floodplains throughout the world that do not have detailed data of sediment deposition during individual floods.
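
    The model's feedback (deposition raises the floodplain, which raises the threshold discharge and so reduces the number of overtopping floods) can be sketched as follows, with an exponential partial-duration flood-frequency curve and illustrative parameter values not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

LAM0 = 3.0   # floods/yr exceeding the initial floodplain-level discharge
B = 50.0     # exponential scale of the partial-duration flood-frequency curve (m3/s)
C = 20.0     # threshold-discharge increase per metre of accretion (m3/s per m)
D = 0.02     # constant net sediment deposition per overtopping flood (m)

h = 0.0      # floodplain elevation above the initial surface (m)
for year in range(200):
    rate = LAM0 * np.exp(-C * h / B)   # fewer floods overtop as the surface rises
    h += D * rng.poisson(rate)         # constant increment per overtopping flood
print(f"elevation after 200 yr: {h:.2f} m")   # growth rate decays as h rises
```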

  18. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    NASA Technical Reports Server (NTRS)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  19. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model

    PubMed Central

    Hall, Kevin D; Butte, Nancy F; Swinburn, Boyd A; Chow, Carson C

    2013-01-01

    Summary Background Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. Methods We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative predictions about weight-management interventions. The model was calibrated to reference body composition data in healthy children and validated by comparing model predictions with data other than those used to build the model. Findings The model accurately simulated the changes in body composition and energy expenditure reported in reference data during healthy growth, and predicted increases in energy intake from ages 5–18 years of roughly 1200 kcal per day in boys and 900 kcal per day in girls. Development of childhood obesity necessitated a substantially greater excess energy intake than for development of adult obesity. Furthermore, excess energy intake in overweight and obese children calculated by the model greatly exceeded the typical energy balance calculated on the basis of growth charts. At the population level, the excess weight of US children in 2003–06 was associated with a mean increase in energy intake of roughly 200 kcal per day per child compared with similar children in 1976–80. The model also suggests that therapeutic windows when children can outgrow obesity without losing weight might exist, especially during periods of high growth potential in boys who are not severely obese. Interpretation This model quantifies the energy excess underlying obesity and calculates the necessary intervention magnitude to achieve bodyweight change in children. Policy makers and clinicians now have a quantitative technique for understanding the childhood obesity epidemic and planning interventions to control it. PMID:24349967
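
    The core bookkeeping of such an energy-balance model can be sketched in a few lines (illustrative constants; the published model adds growth, body-composition partitioning and age dependence): weight changes in proportion to the gap between intake and expenditure, and expenditure rises with weight, so a fixed excess intake drives weight toward a new equilibrium.

```python
RHO = 7700.0             # energy density of bodyweight change, kcal per kg (adult value)
K, GAMMA = 500.0, 25.0   # expenditure model EE = K + GAMMA*W, kcal/day and kcal/day/kg

W = 30.0                       # bodyweight, kg
EI = K + GAMMA * W + 200.0     # intake fixed 200 kcal/day above initial maintenance
for day in range(365):
    EE = K + GAMMA * W         # expenditure rises as weight is gained
    W += (EI - EE) / RHO       # one-day Euler step of RHO * dW/dt = EI - EE
print(f"weight after one year: {W:.1f} kg")   # approaches a new equilibrium
```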

  20. Quantitative determination of guggulsterone in existing natural populations of Commiphora wightii (Arn.) Bhandari for identification of germplasm having higher guggulsterone content.

    PubMed

    Kulhari, Alpana; Sheorayan, Arun; Chaudhury, Ashok; Sarkar, Susheel; Kalia, Rajwant K

    2015-01-01

    Guggulsterone is an aromatic steroidal ketonic compound obtained from vertical rein ducts and canals of bark of Commiphora wightii (Arn.) Bhandari (Family - Burseraceae). Owing to its multifarious medicinal and therapeutic values as well as its various other significant bioactivities, guggulsterone has high demand in pharmaceutical, perfumery and incense industries. More and more pharmaceutical and perfumery industries are showing interest in guggulsterone, therefore, there is a need for its quantitative determination in existing natural populations of C. wightii. Identification of elite germplasm having higher guggulsterone content can be multiplied through conventional or biotechnological means. In the present study an effort was made to estimate two isoforms of guggulsterone i.e. E and Z guggulsterone in raw exudates of 75 accessions of C. wightii collected from three states of North-western India viz. Rajasthan (19 districts), Haryana (4 districts) and Gujarat (3 districts). Extracted steroid rich fraction from stem samples was fractionated using reverse-phase preparative High Performance Liquid Chromatography (HPLC) coupled with UV/VIS detector operating at wavelength of 250 nm. HPLC analysis of stem samples of wild as well as cultivated plants showed that the concentration of E and Z isomers as well as total guggulsterone was highest in Rajasthan, as compared to Haryana and Gujarat states. Highest concentration of E guggulsterone (487.45 μg/g) and Z guggulsterone (487.68 μg/g) was found in samples collected from Devikot (Jaisalmer) and Palana (Bikaner) respectively, the two hyper-arid regions of Rajasthan, India. Quantitative assay was presented on the basis of calibration curve obtained from a mixture of standard E and Z guggulsterones with different validatory parameters including linearity, selectivity and specificity, accuracy, auto-injector, flow-rate, recoveries, limit of detection and limit of quantification (as per norms of International

  1. A Quantitative Model to Estimate Drug Resistance in Pathogens

    PubMed Central

    Baker, Frazier N.; Cushion, Melanie T.; Porollo, Aleksey

    2016-01-01

    Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutations highly correlate with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDFHR, another drug/target pair (PjDHPS/SMX) and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of the newly sequenced variants of the pathogen and can be extended to other drug targets and organisms. PMID:28018911

  2. Towards Quantitative Spatial Models of Seabed Sediment Composition

    PubMed Central

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom’s parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040
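
    The workflow described above, regressing additive log-ratios and back-transforming, can be sketched as follows with synthetic data; the variable names, forest settings and data-generating process are illustrative, not the authors'.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic stand-in for the mapping task: predictors -> two additive
# log-ratios of the (mud, sand, gravel) composition, with gravel as the
# reference part of the alr transform.
n = 1000
X = rng.normal(size=(n, 5))                              # environmental predictors
alr = X @ rng.normal(size=(5, 2)) + rng.normal(scale=0.3, size=(n, 2))

models = [RandomForestRegressor(n_estimators=200, random_state=0).fit(X, alr[:, j])
          for j in range(2)]

z = np.column_stack([m.predict(X) for m in models])      # predicted log-ratios
denom = 1.0 + np.exp(z).sum(axis=1)
pred = np.column_stack([np.exp(z) / denom[:, None],      # mud and sand fractions
                        1.0 / denom])                    # gravel fraction
print(pred[:3].round(3), pred.sum(axis=1)[:3])           # each row sums to 1
```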

  3. Quantitative analysis of cyclic beta-turn models.

    PubMed Central

    Perczel, A.; Fasman, G. D.

    1992-01-01

    The beta-turn is a frequently found structural unit in the conformation of globular proteins. Although the circular dichroism (CD) spectra of the alpha-helix and beta-pleated sheet are well defined, there remains some ambiguity concerning the pure component CD spectra of the different types of beta-turns. Recently, it has been reported (Hollósi, M., Kövér, K.E., Holly, S., Radics, L., & Fasman, G.D., 1987, Biopolymers 26, 1527-1572; Perczel, A., Hollósi, M., Foxman, B.M., & Fasman, G.D., 1991a, J. Am. Chem. Soc. 113, 9772-9784) that some pseudohexapeptides (e.g., the cyclo[(delta)Ava-Gly-Pro-Aaa-Gly] where Aaa = Ser, Ser(OtBu), or Gly) in many solvents adopt a conformational mixture of type I and the type II beta-turns, although the X-ray-determined conformation was an ideal type I beta-turn. In addition to these pseudohexapeptides, conformational analysis was also carried out on three pseudotetrapeptides and three pseudooctapeptides. The target of the conformation analysis reported herein was to determine whether the ring stress of the above beta-turn models has an influence on their conformational properties. Quantitative nuclear Overhauser effect (NOE) measurements yielded interproton distances. The conformational average distances so obtained were interpreted utilizing molecular dynamics (MD) simulations to yield the conformational percentages. These conformational ratios were correlated with the conformational weights obtained by quantitative CD analysis of the same compounds. The pure component CD curves of type I and type II beta-turns were also obtained, using a recently developed algorithm (Perczel, A., Tusnády, G., Hollósi, M., & Fasman, G.D., 1991b, Protein Eng. 4(6), 669-679). For the first time the results of a CD deconvolution, based on the CD spectra of 14 beta-turn models, were assigned by quantitative NOE results. The NOE experiments confirmed the ratios of the component curves found for the two major beta-turns by CD analysis. These results

  4. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy remains unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents the unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools of coal quality assessment. PMID:27438794

  5. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy remains unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents the unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools of coal quality assessment.
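
    A plain (non-robust) version of this PLS workflow can be sketched as follows; the synthetic matrix only mimics the dimensions reported above (132 samples, 24 parameters), and the robust outlier handling the authors used is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)

n, p = 132, 24                       # samples x physico-chemical parameters
X = rng.normal(size=(n, p))
y = X[:, :4] @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=5)  # latent components chosen by validation
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.2f} ({100 * rmsecv / np.ptp(y):.1f}% of the y range)")
```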

  6. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model-fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object which embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superceded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semiquantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area as slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. I enclosed a set of abstract of selected papers on the work describe below. Several papers that draw on the research supported during this period appeared in print after the grant period ended.

  7. Quantitative results for square gradient models of fluids

    NASA Astrophysics Data System (ADS)

    Kong, Ling-Ti; Vriesinga, Dan; Denniston, Colin

    2011-03-01

    Square gradient models for fluids are extensively used because they are believed to provide a good qualitative understanding of the essential physics. However, unlike elasticity theory for solids, there are few quantitative results for specific (as opposed to generic) fluids. Indeed, the only numerical values of the square gradient coefficients for specific fluids have been inferred from attempts to match macroscopic properties such as surface tensions rather than from direct measurement. We employ all-atom molecular dynamics, using the TIP3P and OPLS force fields, to directly measure the coefficients of the density gradient expansion for several real fluids. For all liquids measured, including water, we find that the square gradient coefficient is negative, suggesting the need for some regularization of a model including only the square gradient, but only at wavelengths comparable to the molecular separation of the molecules. The implications for liquid-gas interfaces are also examined. Remarkably, the square gradient model is found to give a reasonably accurate description of density fluctuations in the liquid state down to wavelengths close to atomic size.
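
    For reference, the square gradient free energy in the notation standard for such models (the symbols below are assumed, not quoted from the abstract) is

```latex
F[\rho] \;=\; \int \left[\, f_0(\rho) \;+\; \frac{\kappa}{2}\,\lvert\nabla\rho\rvert^{2} \right] \mathrm{d}V ,
```

    where f_0 is the bulk free-energy density and κ is the coefficient measured in the study. A negative κ makes short-wavelength density fluctuations unstable, which is why a pure square gradient model then needs regularization, for example by a higher-order term such as (κ₂/2)|∇²ρ|² with κ₂ > 0.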

  8. Mechanics of neutrophil phagocytosis: experiments and quantitative models.

    PubMed

    Herant, Marc; Heinrich, Volkmar; Dembo, Micah

    2006-05-01

    To quantitatively characterize the mechanical processes that drive phagocytosis, we observed the FcγR-driven engulfment of antibody-coated beads of diameters 3 μm to 11 μm by initially spherical neutrophils. In particular, the time course of cell morphology, of bead motion and of cortical tension were determined. Here, we introduce a number of mechanistic models for phagocytosis and test their validity by comparing the experimental data with finite element computations for multiple bead sizes. We find that the optimal models involve two key mechanical interactions: a repulsion or pressure between cytoskeleton and free membrane that drives protrusion, and an attraction between cytoskeleton and membrane newly adherent to the bead that flattens the cell into a thin lamella. Other models such as cytoskeletal expansion or swelling appear to be ruled out as main drivers of phagocytosis because of the characteristics of bead motion during engulfment. We finally show that the protrusive force necessary for the engulfment of large beads points towards storage of strain energy in the cytoskeleton over a large distance from the leading edge (approximately 0.5 μm), and that the flattening force can plausibly be generated by the known concentrations of unconventional myosins at the leading edge.

  9. Quantitative rubber sheet models of gravitation wells using Spandex

    NASA Astrophysics Data System (ADS)

    White, Gary

    2008-04-01

    Long a staple of introductory treatments of general relativity, the rubber sheet model exhibits Wheeler's concise summary ("Matter tells space-time how to curve and space-time tells matter how to move") very nicely. But what of the quantitative aspects of the rubber sheet model: how far can the analogy be pushed? We show [1] that when a mass M is suspended from the center of an otherwise unstretched elastic sheet affixed to a circular boundary, it exhibits a distortion far from the center given by h = A(Mr^2)^{1/3}. Here, as might be expected, h and r are the vertical and axial distances from the center, but this result is not the expected logarithmic form of 2-D solutions to Laplace's equation (the stretched drumhead). This surprise has a natural explanation and is confirmed experimentally with Spandex as the medium, and its consequences for general rubber sheet models are pursued. [1] "The shape of 'the Spandex' and orbits upon its surface," American Journal of Physics 70, 48-52 (2002), G. D. White and M. Walker. See also the comment by Don S. Lemons and T. C. Lipscombe, American Journal of Physics 70, 1056-1058 (2002).

  10. Quantitative genetics model as the unifying model for defining genomic relationship and inbreeding coefficient.

    PubMed

    Wang, Chunkao; Da, Yang

    2014-01-01

    The traditional quantitative genetics model was used as the unifying approach to derive six existing and new definitions of genomic additive and dominance relationships. The theoretical differences of these definitions were in the assumptions of equal SNP effects (equivalent to across-SNP standardization), equal SNP variances (equivalent to within-SNP standardization), and expected or sample SNP additive and dominance variances. The six definitions of genomic additive and dominance relationships on average were consistent with the pedigree relationships, but had individual genomic specificity and large variations not observed from pedigree relationships. These large variations may allow finding least related genomes even within the same family for minimizing genomic relatedness among breeding individuals. The six definitions of genomic relationships generally had similar numerical results in genomic best linear unbiased predictions of additive effects (GBLUP) and similar genomic REML (GREML) estimates of additive heritability. Predicted SNP dominance effects and GREML estimates of dominance heritability were similar within definitions assuming equal SNP effects or within definitions assuming equal SNP variance, but had differences between these two groups of definitions. We proposed a new measure of genomic inbreeding coefficient based on parental genomic co-ancestry coefficient and genomic additive correlation as a genomic approach for predicting offspring inbreeding level. This genomic inbreeding coefficient had the highest correlation with pedigree inbreeding coefficient among the four methods evaluated for calculating genomic inbreeding coefficient in a Holstein sample and a swine sample.
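
    One widely used definition of the genomic additive relationship matrix, VanRaden's first method, which corresponds to the equal-SNP-effects assumption discussed above, can be sketched as follows (synthetic genotypes; the paper's other definitions and its proposed co-ancestry-based inbreeding measure are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(5)

n, m = 100, 5000
p = rng.uniform(0.05, 0.5, m)                  # allele frequencies per SNP
M = rng.binomial(2, p, (n, m)).astype(float)   # SNP genotypes coded 0/1/2

Z = M - 2 * p                                  # centre by the expected genotype
G = Z @ Z.T / (2 * (p * (1 - p)).sum())        # genomic additive relationships

inbreeding = np.diag(G) - 1                    # one simple genomic inbreeding measure
print(G.shape, round(float(inbreeding.mean()), 3))
```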

  11. Quantitative Modeling of the Alternative Pathway of the Complement System

    PubMed Central

    Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of alternative pathway on the surface of pathogens in which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent on host cells at the same time period. Our model reveals that tight regulation of complement starts in fluid phase in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection. PMID
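
    A deliberately tiny caricature of the activation/regulation balance can be written as one ODE per surface (illustrative rate constants; the actual model tracks many more species and reactions): amplification feeds back through surface-bound C3b, while regulators strip C3b at a surface-dependent rate, reproducing saturation on the pathogen and near-complete protection of the host cell.

```python
import numpy as np
from scipy.integrate import solve_ivp

def alt_pathway(t, y, reg):
    c3b = y[0]                       # surface-bound C3b (arbitrary units)
    tick = 0.01                      # fluid-phase "tick-over" deposition
    amp = 0.8 * c3b / (1 + c3b)      # saturating convertase-driven amplification
    return [tick + amp - reg * c3b]  # regulators strip C3b at rate `reg`

t = np.linspace(0, 60, 200)
for surface, reg in [("pathogen", 0.05), ("host cell", 1.5)]:
    sol = solve_ivp(alt_pathway, (0, 60), [0.0], args=(reg,), t_eval=t)
    print(f"{surface}: C3b after 60 min = {sol.y[0, -1]:.3f}")
```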

  12. A study about the existence of the leverage effect in stochastic volatility models

    NASA Astrophysics Data System (ADS)

    Florescu, Ionuţ; Pãsãricã, Cristian Gabriel

    2009-02-01

    The empirical relationship between the return of an asset and the volatility of the asset has been well documented in the financial literature. Named the leverage effect or sometimes the risk-premium effect, it is observed in real data that, when the return of the asset decreases, the volatility increases and vice versa. Consequently, it is important to demonstrate that any formulated model for the asset price is capable of generating this effect observed in practice. Furthermore, we need to understand the conditions on the parameters present in the model that guarantee the appearance of the leverage effect. In this paper we analyze two general specifications of stochastic volatility models and their capability of generating the perceived leverage effect. We derive conditions for the appearance of the leverage effect in both of these stochastic volatility models. We exemplify using stochastic volatility models used in practice and we explicitly state the conditions for the existence of the leverage effect in these examples.
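
    In a Heston-type specification (used here only as a familiar example, not as one of the paper's two model classes) the leverage effect is controlled by the correlation ρ between the price and variance Brownian motions; a negative ρ reproduces the observed return-volatility relationship, as this Euler-Maruyama sketch with illustrative parameters shows.

```python
import numpy as np

rng = np.random.default_rng(6)

kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7   # mean reversion, level, vol-of-vol, corr
dt, n = 1 / 252, 252 * 40                      # daily steps, 40 years

v = theta                        # instantaneous variance
rets, dvs = [], []
for _ in range(n):
    z1 = rng.normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal()   # correlated shocks
    r = -0.5 * v * dt + np.sqrt(v * dt) * z1             # log-return increment
    dv = kappa * (theta - v) * dt + xi * np.sqrt(v * dt) * z2
    rets.append(r); dvs.append(dv)
    v = max(v + dv, 1e-8)        # full truncation keeps the variance positive

print("corr(return, dv) =", np.corrcoef(rets, dvs)[0, 1].round(2))  # close to rho
```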

  13. Modelling bacterial growth in quantitative microbiological risk assessment: is it possible?

    PubMed

    Nauta, Maarten J

    2002-03-01

    Quantitative microbiological risk assessment (QMRA), predictive modelling and HACCP may be used as tools to increase food safety and can be integrated fruitfully for many purposes. However, when QMRA is applied to public health issues like the evaluation of the status of public health, existing predictive models may not be suited to model bacterial growth. In this context, precise quantification of risks is more important than in the context of food manufacturing alone. In this paper, the modular process risk model (MPRM) is briefly introduced as a QMRA modelling framework. This framework can be used to model the transmission of pathogens through any food pathway, by assigning one of six basic processes (modules) to each of the processing steps. Bacterial growth is one of these basic processes. For QMRA, models of bacterial growth need to be expressed in terms of probability, for example to predict the probability that a critical concentration is reached within a certain amount of time. In contrast, available predictive models are developed and validated to produce point estimates of population sizes and therefore do not meet this requirement. Recent experience from a European risk assessment project is discussed to illustrate some of the problems that may arise when predictive growth models are used in QMRA. It is suggested that a new type of predictive model needs to be developed that incorporates modelling of variability and uncertainty in growth.
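
    A minimal sketch of the probabilistic reformulation the author calls for: Monte Carlo simulation of exponential growth with variable initial load and growth rate, yielding the probability that a critical concentration is reached within a given time. All distributions and thresholds below are hypothetical.

```python
import numpy as np

# Toy illustration: exponential growth N(T) = N0 * exp(mu * T), with
# between-unit variability in initial count N0 and growth rate mu.
rng = np.random.default_rng(7)
n_sim = 100_000
N0 = 10.0 ** rng.normal(1.0, 0.5, n_sim)    # initial count (CFU/g)
mu = rng.normal(0.25, 0.08, n_sim)          # specific growth rate (1/h)
T, N_crit = 24.0, 1e6                       # storage time (h), critical level

N_T = N0 * np.exp(np.clip(mu, 0.0, None) * T)
print("P(critical concentration reached):", np.mean(N_T > N_crit))
```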

  14. Mixed quantitative/qualitative modeling and simulation of the cardiovascular system.

    PubMed

    Nebot, A; Cellier, F E; Vallverdú, M

    1998-02-01

    The cardiovascular system is composed of the hemodynamical system and the central nervous system (CNS) control. Whereas the structure and functioning of the hemodynamical system are well known and a number of quantitative models have already been developed that capture the behavior of the hemodynamical system fairly accurately, the CNS control is, at present, still not completely understood and no good deductive models exist that are able to describe the CNS control from physical and physiological principles. The use of qualitative methodologies may offer an interesting alternative to quantitative modeling approaches for inductively capturing the behavior of the CNS control. In this paper, a qualitative model of the CNS control of the cardiovascular system is developed by means of the fuzzy inductive reasoning (FIR) methodology. FIR is a fairly new modeling technique that is based on the general system problem solving (GSPS) methodology developed by G.J. Klir (Architecture of Systems Problem Solving, Plenum Press, New York, 1985). Previous investigations have demonstrated the applicability of this approach to modeling and simulating systems, the structure of which is partially or totally unknown. In this paper, five separate controller models for different control actuations are described that have been identified independently using the FIR methodology. Then the loop between the hemodynamical system, modeled by means of differential equations, and the CNS control, modeled in terms of five FIR models, is closed, in order to study the behavior of the cardiovascular system as a whole. The model described in this paper has been validated for a single patient only.

  15. Quantitative multiphase model for hydrothermal liquefaction of algal biomass

    SciTech Connect

    Li, Yalin; Leow, Shijie; Fedders, Anna C.; Sharma, Brajendra K.; Guest, Jeremy S.; Strathmann, Timothy J.

    2017-01-01

    Optimized incorporation of hydrothermal liquefaction (HTL, reaction in water at elevated temperature and pressure) within an integrated biorefinery requires accurate models to predict the quantity and quality of all HTL products. Existing models primarily focus on biocrude product yields with limited consideration for biocrude quality and aqueous, gas, and biochar co-products, and have not been validated with an extensive collection of feedstocks. In this study, HTL experiments (300 °C, 30 min) were conducted using 24 different batches of microalgae feedstocks with distinctive feedstock properties, which resulted in a wide range of biocrude (21.3-54.3 dw%, dry weight basis), aqueous (4.6-31.2 dw%), gas (7.1-35.6 dw%), and biochar (1.3-35.0 dw%) yields.

  16. Normal fault growth above pre-existing structures: insights from discrete element modelling

    NASA Astrophysics Data System (ADS)

    Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas

    2016-04-01

    In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones not only localise deformation in the host rock, but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems. The technique has therefore been applied to describe fault and fracture growth in a variety of geological settings. We model normal faulting by extending, by 30%, a 60×60×30 km crustal rift-basin model that includes brittle and ductile interactions as well as gravitational and isostatic forces. An inclined plane of weakness representing a pre-existing shear zone is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata, where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upwards propagation of pre-existing structures.

  17. An overview of existing modeling tools making use of model checking in the analysis of biochemical networks

    PubMed Central

    Carrillo, Miguel; Góngora, Pedro A.; Rosenblueth, David A.

    2012-01-01

    Model checking is a well-established technique for automatically verifying complex systems. Recently, model checkers have appeared in computer tools for the analysis of biochemical (and gene regulatory) networks. We survey several such tools to assess the potential of model checking in computational biology. Next, our overview focuses on direct applications of existing model checkers, as well as on algorithms for biochemical network analysis influenced by model checking, such as those using binary decision diagrams (BDDs) or Boolean-satisfiability solvers. We conclude with advantages and drawbacks of model checking for the analysis of biochemical networks. PMID:22833747
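
    To make the idea concrete, here is a minimal explicit-state sketch: breadth-first reachability over a toy nondeterministic Boolean network, checking the safety property that a given "bad" state is unreachable. This is a generic illustration, not one of the surveyed tools, and the update rules are invented for the example.

```python
from collections import deque

# Toy 2-gene Boolean network with two nondeterministic update rules.
def step(state):
    a, b = state
    yield (b, a and not b)   # one possible next state
    yield (a, not a)         # an alternative next state

def unreachable(init, bad):
    """Check the safety property 'bad is never reached' (AG !bad)
    by explicit-state breadth-first search."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if s == bad:
            return False
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True

print(unreachable(init=(True, False), bad=(True, True)))
```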

  18. Quantitative Model of microRNA-mRNA interaction

    NASA Astrophysics Data System (ADS)

    Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj

    2012-02-01

    MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveals the existence of a threshold linear output of protein based on the expression level of microRNA. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediary regime where the expression level of mRNA and microRNA has the same order of magnitude. In this crossover regime the mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA hypothesis (ceRNA), in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
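
    A minimal sketch of the threshold-linear behaviour, assuming simple mass-action titration kinetics with illustrative parameters (not the paper's model): steady-state mRNA stays near zero while its transcription rate is below the microRNA supply, then rises roughly linearly above it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy titration kinetics: free mRNA m and microRNA s bind at rate k_on
# and are degraded together; all parameters are illustrative only.
gamma_m, gamma_s, k_on, k_s = 1.0, 1.0, 50.0, 10.0

def steady_mrna(k_m):
    rhs = lambda t, y: [k_m - gamma_m * y[0] - k_on * y[0] * y[1],
                        k_s - gamma_s * y[1] - k_on * y[0] * y[1]]
    return solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], rtol=1e-8).y[0, -1]

for k_m in (2.0, 5.0, 8.0, 10.0, 12.0, 15.0, 20.0):
    # Expression stays near zero below k_m ~ k_s, then grows ~linearly:
    print(f"k_m={k_m:5.1f}  steady-state mRNA={steady_mrna(k_m):8.4f}")
```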

  19. Scalar conservation laws with moving constraints arising in traffic flow modeling: An existence result

    NASA Astrophysics Data System (ADS)

    Delle Monache, M. L.; Goatin, P.

    2014-12-01

    We consider a strongly coupled PDE-ODE system that describes the influence of a slow and large vehicle on road traffic. The model consists of a scalar conservation law accounting for the main traffic evolution, while the trajectory of the slower vehicle is given by an ODE depending on the downstream traffic density. The moving constraint is expressed by an inequality on the flux, which models the bottleneck created in the road by the presence of the slower vehicle. We prove the existence of solutions to the Cauchy problem for initial data of bounded variation.
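
    In schematic form (our notation, reconstructed from the description above rather than quoted from the paper), the coupled PDE-ODE system with its moving flux constraint reads:

```latex
\begin{aligned}
&\partial_t \rho + \partial_x f(\rho) = 0
  && \text{(conservation law for the traffic density } \rho\text{)}\\
&\dot{y}(t) = \omega\bigl(\rho(t, y(t)+)\bigr)
  && \text{(ODE for the slow vehicle's trajectory } y\text{)}\\
&f\bigl(\rho(t, y(t))\bigr) - \dot{y}(t)\,\rho(t, y(t)) \le F_{\max}
  && \text{(flux inequality modelling the moving bottleneck)}
\end{aligned}
```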

  20. A short time existence/uniqueness result for a nonlocal topology-preserving segmentation model

    NASA Astrophysics Data System (ADS)

    Forcadel, Nicolas; Le Guyader, Carole

    Motivated by a prior applied work of Vese and the second author dedicated to segmentation under topological constraints, we derive a slightly modified model phrased as a functional minimization problem, and propose to study it from a theoretical viewpoint. The mathematical model leads to a second order nonlinear PDE with a singularity at Du=0 and containing a nonlocal term. A suitable setting is thus the one of the viscosity solution theory and, in this framework, we establish a short time existence/uniqueness result as well as a Lipschitz regularity result for the solution.

  1. Existing General Population Models Inaccurately Predict Lung Cancer Risk in Patients Referred for Surgical Evaluation

    PubMed Central

    Isbell, James M.; Deppen, Stephen; Putnam, Joe B.; Nesbitt, Jonathan C.; Lambright, Eric S.; Dawes, Aaron; Massion, Pierre P.; Speroff, Theodore; Jones, David R.; Grogan, Eric L.

    2013-01-01

    Background: Patients undergoing resections for suspicious pulmonary lesions have a 9-55% benign rate. Validated prediction models exist to estimate the probability of malignancy in a general population and current practice guidelines recommend their use. We evaluated these models in a surgical population to determine the accuracy of existing models to predict benign or malignant disease. Methods: We conducted a retrospective review of our thoracic surgery quality improvement database (2005-2008) to identify patients who underwent resection of a pulmonary lesion. Patients were stratified into subgroups based on age, smoking status and fluorodeoxyglucose positron emission tomography (PET) results. The probability of malignancy was calculated for each patient using the Mayo and SPN prediction models. Receiver operating characteristic (ROC) and calibration curves were used to measure model performance. Results: 89 patients met selection criteria; 73% were malignant. Patients with preoperative PET scans were divided into 4 subgroups based on age, smoking history and nodule PET avidity. Older smokers with PET-avid lesions had a 90% malignancy rate. Patients with PET-non-avid lesions, or PET-avid lesions with age <50 years or never smokers of any age, had a 62% malignancy rate. The area under the ROC curve for the Mayo and SPN models was 0.79 and 0.80, respectively; however, the models were poorly calibrated (p<0.001). Conclusions: Despite improvements in diagnostic and imaging techniques, current general population models do not accurately predict lung cancer among patients referred for surgical evaluation. Prediction models with greater accuracy are needed to identify patients with benign disease to reduce non-therapeutic resections. PMID:21172518
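
    A generic sketch of the kind of discrimination and calibration assessment described, on synthetic labels and predicted probabilities (scikit-learn; not the study data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 500)   # 0 = benign, 1 = malignant (synthetic)
p_model = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, 500), 0.01, 0.99)

# Discrimination: area under the ROC curve
print("AUC:", round(roc_auc_score(y_true, p_model), 3))

# Calibration: observed event rate vs mean predicted probability per bin
frac_pos, mean_pred = calibration_curve(y_true, p_model, n_bins=5)
for fp, mp in zip(frac_pos, mean_pred):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
```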

  2. A poultry-processing model for quantitative microbiological risk assessment.

    PubMed

    Nauta, Maarten; van der Fels-Klerx, Ine; Havelaar, Arie

    2005-02-01

    A poultry-processing model for a quantitative microbiological risk assessment (QMRA) of campylobacter is presented, which can also be applied to other QMRAs involving poultry processing. The same basic model is applied in each consecutive stage of industrial processing. It describes the effects of inactivation and removal of the bacteria, and the dynamics of cross-contamination in terms of the transfer of campylobacter from the intestines to the carcass surface and the environment, from the carcasses to the environment, and from the environment to the carcasses. From the model it can be derived that, in general, the effect of inactivation and removal is dominant for those carcasses with high initial bacterial loads, and cross-contamination is dominant for those with low initial levels. In other QMRA poultry-processing models, the input-output relationship between the numbers of bacteria on the carcasses is usually assumed to be linear on a logarithmic scale. By including some basic mechanistics, it is shown that this may not be realistic. As nonlinear behavior may affect the predicted effects of risk mitigations, this finding is relevant for risk management. Good knowledge of the variability of bacterial loads on poultry entering the process is important. The common practice in microbiology of presenting only the geometric mean of bacterial counts is insufficient: arithmetic means are more suitable, in particular to describe the effect of cross-contamination. The effects of logistic slaughter (scheduled processing) as a risk mitigation strategy are predicted to be small. Some additional complications in applying microbiological data obtained in processing plants are discussed.
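
    A toy per-stage update illustrating why inactivation/removal dominates at high initial loads while cross-contamination dominates at low loads; all transfer fractions below are hypothetical, not the published model's estimates.

```python
# Toy per-stage update, applied at each consecutive processing stage:
# a fraction of bacteria is inactivated/removed, a fraction transfers
# carcass -> environment, and some environmental load transfers back.
def process_stage(carcass, env, p_remove=0.9, p_to_env=0.05, p_from_env=0.01):
    to_env = p_to_env * carcass
    from_env = p_from_env * env
    new_carcass = (1.0 - p_remove) * (carcass - to_env) + from_env
    new_env = env + to_env - from_env
    return new_carcass, new_env

for c0 in (1e2, 1e5, 1e8):          # low vs high initial loads (CFU)
    c, e = c0, 1e4
    for _ in range(5):               # five consecutive stages
        c, e = process_stage(c, e)
    # The input-output relation is not linear on a log scale: low-load
    # carcasses end up dominated by cross-contamination, high-load ones
    # by inactivation and removal.
    print(f"initial {c0:.0e} -> final {c:.2e}")
```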

  3. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
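
    A minimal weighted least-squares illustration of the final combination step: blending a (biased) model solution with sparse, noisy observations, weighting the data by its accuracy. This is a generic numpy sketch, not the WLSFEM formulation.

```python
import numpy as np

# u_model: velocity from any CFD solver; u_obs: sparse, noisy measurements.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 50)
u_model = 0.9 * np.sin(np.pi * x)            # slightly biased model solution
obs_idx = np.arange(0, 50, 5)
u_obs = np.sin(np.pi * x[obs_idx]) + rng.normal(0.0, 0.02, obs_idx.size)

# Minimize  w_m*(u - u_model)^2  at every node plus  w_o*(u - u_obs)^2
# at observed nodes; with diagonal weights the solution is a pointwise
# weighted average. Observation weights encode data accuracy (1/variance).
w_model, w_obs = 1.0, 1.0 / 0.02**2
u = u_model.copy()
u[obs_idx] = (w_model * u_model[obs_idx] + w_obs * u_obs) / (w_model + w_obs)
print("max error vs truth:", np.max(np.abs(u - np.sin(np.pi * x))))
```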

  4. A review of existing models and methods to estimate employment effects of pollution control policies

    SciTech Connect

    Darwin, R.F.; Nesse, R.J.

    1988-02-01

    The purpose of this paper is to provide information about existing models and methods used to estimate coal mining employment impacts of pollution control policies. The EPA is currently assessing the consequences of various alternative policies to reduce air pollution. One important potential consequence of these policies is that coal mining employment may decline or shift from low-sulfur to high-sulfur coal producing regions. The EPA requires models that can estimate the magnitude and cost of these employment changes at the local level. This paper contains descriptions and evaluations of three models and methods currently used to estimate the size and cost of coal mining employment changes. The first model reviewed is the Coal and Electric Utilities Model (CEUM), a well established, general purpose model that has been used by the EPA and other groups to simulate air pollution control policies. The second model reviewed is the Advanced Utility Simulation Model (AUSM), which was developed for the EPA specifically to analyze the impacts of air pollution control policies. Finally, the methodology used by Arthur D. Little, Inc. to estimate the costs of alternative air pollution control policies for the Consolidated Coal Company is discussed. These descriptions and evaluations are based on information obtained from published reports and from draft documentation of the models provided by the EPA. 12 refs., 1 fig.

  5. Quantitative comparisons of analogue models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ˜20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  6. Quantitative phase-field modeling for boiling phenomena

    NASA Astrophysics Data System (ADS)

    Badillo, Arnoldo

    2012-10-01

    A phase-field model is developed for quantitative simulation of bubble growth in the diffusion-controlled regime. The model accounts for phase change and surface tension effects at the liquid-vapor interface of pure substances with large property contrast. The derivation of the model follows a two-fluid approach, where the diffuse interface is assumed to have an internal microstructure, defined by a sharp interface. Despite the fact that phases within the diffuse interface are considered to have their own velocities and pressures, an averaging procedure at the atomic scale, allows for expressing all the constitutive equations in terms of mixture quantities. From the averaging procedure and asymptotic analysis of the model, nonconventional terms appear in the energy and phase-field equations to compensate for the variation of the properties across the diffuse interface. Without these new terms, no convergence towards the sharp-interface model can be attained. The asymptotic analysis also revealed a very small thermal capillary length for real fluids, such as water, that makes impossible for conventional phase-field models to capture bubble growth in the millimeter range size. For instance, important phenomena such as bubble growth and detachment from a hot surface could not be simulated due to the large number of grids points required to resolve all the scales. Since the shape of the liquid-vapor interface is primarily controlled by the effects of an isotropic surface energy (surface tension), a solution involving the elimination of the curvature from the phase-field equation is devised. The elimination of the curvature from the phase-field equation changes the length scale dominating the phase change from the thermal capillary length to the thickness of the thermal boundary layer, which is several orders of magnitude larger. A detailed analysis of the phase-field equation revealed that a split of this equation into two independent parts is possible for system sizes

  7. Global existence and asymptotic stability for a nonlinear integrodifferential equation modeling heat flow

    NASA Astrophysics Data System (ADS)

    Brandon, Deborah

    1989-06-01

    Initial value problems were studied that arise from models for 1-D heat flow (with finite wave speeds) in materials with memory. Under assumptions that ensure compatibility of the constitutive relations with the second law of thermodynamics, the resulting integrodifferential equation is hyperbolic near equilibrium. The existence of unique, globally (in time) defined classical solutions to the problems under consideration is established, provided the data are smooth and sufficiently close to equilibrium. Both Dirichlet and Neumann boundary conditions are treated, as well as the problem on the entire real line. Local existence is proved using a contraction mapping argument which involves estimates for linear hyperbolic PDEs with variable coefficients. Global existence is obtained by deriving a priori energy estimates. These estimates are based on inequalities for strongly positive Volterra kernels (including a new inequality that is needed due to the form of the constitutive relations). Furthermore, compatibility with the second law plays an essential role in the proof in order to obtain an existence result under less restrictive assumptions on the data.

  8. Evaluation Between Existing and Improved CCF Modeling Using the NRC SPAR Models

    SciTech Connect

    James K. Knudsen

    2010-06-01

    The NRC SPAR models currently employ the alpha factor common cause failure (CCF) methodology and model CCF for a group of redundant components as a single “rolled-up” basic event. These SPAR models will be updated to employ a more computationally intensive and accurate approach by expanding the CCF basic events for all active components to include all terms that appear in the Basic Parameter Model (BPM). A discussion is provided to detail the differences between the rolled-up common cause group (CCG) and expanded BPM adjustment concepts based on differences in core damage frequency and individual component importance measures. Lastly, a hypothetical condition is evaluated with a SPAR model to show the difference in results between the current adjustment method (rolled-up CCF events) and the newer method employing all of the expanded terms in the BPM. The event evaluation on the SPAR model employing the expanded terms will be solved using the graphical evaluation module (GEM) and the proposed method discussed in Reference 1.
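
    For concreteness, here is a sketch of one common (non-staggered) form of the alpha-factor mapping from a common cause group's total failure probability to the expanded Basic Parameter Model terms. Consult NUREG/CR-5485 and the SPAR documentation for the exact conventions, which may differ from this sketch.

```python
from math import comb

def alpha_factor_terms(alphas, q_total):
    """Basic Parameter Model probabilities Q_k from alpha factors.

    One common (non-staggered) form of the mapping for a group of m
    redundant components:
        Q_k = k / C(m-1, k-1) * alpha_k / alpha_t * Q_t,
    with alpha_t = sum_k k * alpha_k. Illustrative sketch only.
    """
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k / comb(m - 1, k - 1) * a / alpha_t * q_total
            for k, a in enumerate(alphas, start=1)]

# Hypothetical 3-component group: alpha_1..alpha_3 and total failure prob.
Q = alpha_factor_terms([0.95, 0.03, 0.02], q_total=1e-3)
for k, q in enumerate(Q, start=1):
    print(f"Q_{k} = {q:.3e}")   # expanded BPM terms vs one rolled-up event
```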

  9. Quantitative property-structural relation modeling on polymeric dielectric materials

    NASA Astrophysics Data System (ADS)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix
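
    A generic QSPR-style sketch follows: mapping numeric descriptors to a property with a standard learner on synthetic data. Support vector regression stands in here for the thesis's PELM and mixture approaches, which are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 8 descriptors per polymer and one target property
# (for example a dielectric constant); the relationship is invented.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
y = 3.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])                      # train on 150 "polymers"
print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
```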

  10. Towards real-time change detection in videos based on existing 3D models

    NASA Astrophysics Data System (ADS)

    Ruf, Boitumelo; Schuchert, Tobias

    2016-10-01

    Image-based change detection is of great importance for security applications, such as surveillance and reconnaissance, in order to find new, modified or removed objects. Such change detection can generally be performed by co-registration and comparison of two or more images. However, existing 3D objects, such as buildings, may lead to parallax artifacts in case of inaccurate or missing 3D information, which may distort the results in the image comparison process, especially when the images are acquired from aerial platforms like small unmanned aerial vehicles (UAVs). Furthermore, considering only intensity information may lead to failures in detection of changes in the 3D structure of objects. To overcome this problem, we present an approach that uses Structure-from-Motion (SfM) to compute depth information, with which a 3D change detection can be performed against an existing 3D model. Our approach is capable of change detection in real time. We use the input frames with the corresponding camera poses to compute dense depth maps by an image-based depth estimation algorithm. Additionally, we synthesize a second set of depth maps by rendering the existing 3D model from the same camera poses as those of the image-based depth maps. The actual change detection is performed by comparing the two sets of depth maps with each other. Our method is evaluated on synthetic test data with corresponding ground truth as well as on real image test data.
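
    The comparison step can be sketched in a few lines: subtract the model-rendered depth map from the image-based one and threshold the residual. The synthetic arrays and the threshold value are illustrative only.

```python
import numpy as np

# Compare an image-based depth map with one rendered from the 3D model
# at the same camera pose; large discrepancies flag 3D changes.
rng = np.random.default_rng(4)
depth_sfm = 10.0 + rng.normal(0.0, 0.05, (120, 160))  # estimated depth (m)
depth_model = np.full((120, 160), 10.0)               # rendered from model
depth_sfm[40:60, 50:80] -= 2.0                        # a new object in scene

residual = np.abs(depth_sfm - depth_model)
change_mask = residual > 0.5                          # metric threshold (m)
print("changed pixels:", int(change_mask.sum()))      # ~20x30 region flagged
```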

  11. Numerical Modelling of Extended Leak-Off Test with a Pre-Existing Fracture

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Larsen, I.; Bauer, A.

    2016-04-01

    Extended leak-off test (XLOT) is one of the few techniques available for stress measurements in oil and gas wells. Interpretation of the test is often difficult since the results depend on a multitude of factors, including the presence of natural or drilling-induced fractures in the near-well area. Coupled numerical modelling of XLOT has been performed to investigate the pressure behaviour during the flowback phase as well as the effect of a pre-existing fracture on the test results in a low-permeability formation. Essential features of XLOT known from field measurements are captured by the model, including the saw-tooth shape of the pressure vs injected volume curve, and the change of slope in the pressure vs time curve during flowback used by operators as an indicator of the bottomhole pressure reaching the minimum in situ stress. Simulations with a pre-existing fracture running from the borehole wall in the radial direction have revealed that the results of XLOT are quite sensitive to the orientation of the pre-existing fracture. In particular, the fracture initiation pressure and the formation breakdown pressure increase steadily with decreasing angle between the fracture and the minimum in situ stress. Our findings seem to invalidate the use of the fracture initiation pressure and the formation breakdown pressure for stress measurements or rock strength evaluation purposes.

  12. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients world-wide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall further development is needed to improve and validate animal models for the diverse areas in epilepsy research where suitable fit for purpose models are urgently needed in the search for more effective treatments.

  13. Quantitative phase-field modeling for wetting phenomena.

    PubMed

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of interface thickness W, and β, which is used in the definition of the mixture viscosity μ = μ_l φ_l + μ_v φ_v + β μ_l φ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates.

  14. Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, I. I.; Turaev, D. V.

    2017-01-01

    We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.

  15. Local Existence of Weak Solutions to Kinetic Models of Granular Media

    NASA Astrophysics Data System (ADS)

    Agueh, Martial

    2016-08-01

    We prove in any dimension d ≥ 1 a local in time existence of weak solutions to the Cauchy problem for the kinetic equation of granular media, ∂_t f + v·∇_x f = div_v[f(∇W ∗_v f)], when the initial data are nonnegative, integrable and bounded functions with compact support in velocity, and the interaction potential W is a C²(ℝ^d) radially symmetric convex function. Our proof is constructive and relies on a splitting argument in position and velocity, where the spatially homogeneous equation is interpreted as the gradient flow of a convex interaction energy with respect to the quadratic Wasserstein distance. Our result generalizes the local existence result obtained by Benedetto et al. (RAIRO Modél Math Anal Numér 31(5):615-641, 1997) on the one-dimensional model of this equation for a cubic power-law interaction potential.

  16. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... COMMISSION NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection... issued for public comment a document entitled: NUREG/CR-XXXX, ``Development of Quantitative Software... ``Review of Quantitative Software Reliability Methods,'' BNL- 94047-2010 (ADAMS Accession No....

  17. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.

    PubMed

    Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-04-01

    To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a single-variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that the joint interaction analysis of multiple phenotypes has a much higher power to detect interaction than the interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes.
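
    A strongly simplified sketch of the nested-model idea behind such interaction tests: each gene's variants are reduced to a low-dimensional score (the MFRG uses functional basis expansions over genomic position; a plain mean score stands in here), and an F-test compares models with and without the gene-gene interaction term. Data and effect sizes are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = rng = np.random.default_rng(6)
n = 500
g1 = rng.integers(0, 3, (n, 20)).mean(axis=1)   # gene 1 summary score
g2 = rng.integers(0, 3, (n, 15)).mean(axis=1)   # gene 2 summary score
y = 0.5 * g1 + 0.3 * g2 + 0.8 * g1 * g2 + rng.normal(0.0, 1.0, n)

X0 = sm.add_constant(np.column_stack([g1, g2]))           # main effects only
X1 = sm.add_constant(np.column_stack([g1, g2, g1 * g2]))  # plus interaction
fit0, fit1 = sm.OLS(y, X0).fit(), sm.OLS(y, X1).fit()
f_stat, p_value, df_diff = fit1.compare_f_test(fit0)      # nested F-test
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
```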

  18. Using Existing Arctic Atmospheric Mercury Measurements to Refine Global and Regional Scale Atmospheric Transport Models

    NASA Astrophysics Data System (ADS)

    Moore, C. W.; Dastoor, A.; Steffen, A.; Nghiem, S. V.; Agnan, Y.; Obrist, D.

    2015-12-01

    Northern hemisphere background atmospheric concentrations of gaseous elemental mercury (GEM) have been declining by up to 25% over the last ten years at some lower latitude sites. However, this decline has ranged from no decline to 9% over 10 years at Arctic long-term measurement sites. Measurements also show a highly dynamic nature of mercury (Hg) species in Arctic air and snow from early spring to the end of summer when biogeochemical transformations peak. Currently, models are unable to reproduce this variability accurately. Estimates of Hg accumulation in the Arctic and Arctic Ocean by models require a full mechanistic understanding of the multi-phase redox chemistry of Hg in air and snow as well as the role of meteorology in the physicochemical processes of Hg. We will show how findings from ground-based atmospheric Hg measurements like those made in spring 2012 during the Bromine, Ozone and Mercury Experiment (BROMEX) near Barrow, Alaska can be used to reduce the discrepancy between measurements and model output in the Canadian GEM-MACH-Hg model. The model is able to reproduce and to explain some of the variability in Arctic Hg measurements but discrepancies still remain. One improvement involves incorporation of new physical mechanisms such as the one we were able to identify during BROMEX. This mechanism, by which atmospheric mercury depletion events are abruptly ended via sea ice leads opening and inducing shallow convective mixing that replenishes GEM (and ozone) in the near surface atmospheric layer, causing an immediate recovery from the depletion event, is currently lacking in models. Future implementation of this physical mechanism will have to incorporate current remote sensing sea ice products but also rely on the development of products that can identify sea ice leads quantitatively. In this way, we can advance the knowledge of the dynamic nature of GEM in the Arctic and the impact of climate change along with new regulations on the overall

  19. A quantitative speciation model for the adsorption of organic pollutants on activated carbon.

    PubMed

    Grivé, M; García, D; Domènech, C; Richard, L; Rojo, I; Martínez, X; Rovira, M

    2013-01-01

    Granular activated carbon (GAC) is commonly used as adsorbent in water treatment plants given its high capacity for retaining organic pollutants in aqueous phase. The current knowledge on GAC behaviour is essentially empirical, and no quantitative description of the chemical relationships between GAC surface groups and pollutants has been proposed. In this paper, we describe a quantitative model for the adsorption of atrazine onto GAC surface. The model is based on results of potentiometric titrations and three types of adsorption experiments which have been carried out in order to determine the nature and distribution of the functional groups on the GAC surface, and evaluate the adsorption characteristics of GAC towards atrazine. Potentiometric titrations have indicated the existence of at least two different families of chemical groups on the GAC surface, including phenolic- and benzoic-type surface groups. Adsorption experiments with atrazine have been satisfactorily modelled with the geochemical code PhreeqC, assuming that atrazine is sorbed onto the GAC surface in equilibrium (log Ks = 5.1 ± 0.5). Independent thermodynamic calculations suggest a possible adsorption of atrazine on a benzoic derivative. The present work opens a new approach for improving the adsorption capabilities of GAC towards organic pollutants by modifying its chemical properties.
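
    A minimal mass-action sketch of the equilibrium sorption calculation: a single site type S binding atrazine A with the reported log Ks = 5.1. The site density and atrazine concentration below are hypothetical, and the real model (PhreeqC, two site families) is richer.

```python
from scipy.optimize import brentq

# Mass-action sorption:  S + A <-> SA,  Ks = [SA] / ([S][A]).
Ks = 10.0 ** 5.1        # equilibrium constant from the abstract (L/mol)
S_tot = 1e-3            # total surface sites (mol/L equivalent, hypothetical)
A_tot = 1e-5            # total atrazine (mol/L, hypothetical)

def residual(sa):
    # Equilibrium condition after substituting both mass balances:
    return Ks * (S_tot - sa) * (A_tot - sa) - sa

sa = brentq(residual, 0.0, min(S_tot, A_tot))
print(f"fraction of atrazine adsorbed: {sa / A_tot:.2%}")
```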

  20. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-01-01

    Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. PMID:26058849

  2. Development of an Experimental Model of Diabetes Co-Existing with Metabolic Syndrome in Rats

    PubMed Central

    Suman, Rajesh Kumar; Ray Mohanty, Ipseeta; Borde, Manjusha K.; Maheshwari, Ujwala; Deshmukh, Y. A.

    2016-01-01

    Background. The incidence of metabolic syndrome co-existing with diabetes mellitus is on the rise globally. Objective. The present study was designed to develop a unique animal model that will mimic the pathological features seen in individuals with diabetes and metabolic syndrome, suitable for pharmacological screening of drugs. Materials and Methods. A combination of High-Fat Diet (HFD) and low dose of streptozotocin (STZ) at 30, 35, and 40 mg/kg was used to induce metabolic syndrome in the setting of diabetes mellitus in Wistar rats. Results. The 40 mg/kg STZ produced sustained hyperglycemia and the dose was thus selected for the study to induce diabetes mellitus. Various components of metabolic syndrome such as dyslipidemia (increased triglyceride, total cholesterol, LDL cholesterol, and decreased HDL cholesterol), diabetes mellitus (blood glucose, HbA1c, serum insulin, and C-peptide), and hypertension (systolic blood pressure) were mimicked in the developed model of metabolic syndrome co-existing with diabetes mellitus. In addition to significant cardiac injury, atherogenic index, inflammation (hs-CRP), decline in hepatic and renal function were observed in the HF-DC group when compared to NC group rats. The histopathological assessment confirmed presence of edema, necrosis, and inflammation in heart, pancreas, liver, and kidney of HF-DC group as compared to NC. Conclusion. The present study has developed a unique rodent model of metabolic syndrome, with diabetes as an essential component. PMID:26880906

  3. Existence and qualitative properties of travelling waves for an epidemiological model with mutations

    NASA Astrophysics Data System (ADS)

    Griette, Quentin; Raoul, Gaël

    2016-05-01

    In this article, we are interested in a non-monotonic system of logistic reaction-diffusion equations. This system of equations models an epidemic where two types of pathogens are competing, and a mutation can change one type into the other with a certain rate. We show the existence of travelling waves with minimal speed, which are usually non-monotonic. Then we provide a description of the shape of those constructed travelling waves, and relate them to some Fisher-KPP fronts with non-minimal speed.

  4. Existence of a line of critical points in a two-dimensional Lebwohl Lasher model

    NASA Astrophysics Data System (ADS)

    Shabnam, Sabana; DasGupta, Sudeshna; Roy, Soumen Kumar

    2016-02-01

    Controversy regarding transitions in systems with global symmetry group O(3) has attracted the attention of researchers, and the detailed nature of this transition is still not well understood. As an example of such a system, in this paper we have studied a two-dimensional Lebwohl-Lasher model using the Wolff cluster algorithm. Though we have not been able to reach any definitive conclusions regarding the order present in the system, from finite-size scaling analysis, hyperscaling relations and the behavior of the correlation function we have obtained strong indications regarding the presence of quasi-long-range order and the existence of a line of critical points in our system.
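
    To make the model concrete, here is a minimal Metropolis sketch of the 2D Lebwohl-Lasher model (3D unit vectors on a square lattice with nearest-neighbour -P2 coupling). The paper itself uses the Wolff cluster algorithm, which is more involved; lattice size, temperature and update count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
L, beta, updates = 16, 1.0, 20_000     # lattice size, 1/kT, MC updates

def random_unit(n):
    """n random 3D unit vectors (nematic 'spins')."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def p2(c):
    """Second Legendre polynomial P2(cos theta)."""
    return 0.5 * (3.0 * c * c - 1.0)

spins = random_unit(L * L).reshape(L, L, 3)
for _ in range(updates):
    i, j = rng.integers(0, L, 2)
    new = random_unit(1)[0]
    dE = 0.0                            # E = -sum_<ij> P2(s_i . s_j)
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = spins[(i + di) % L, (j + dj) % L]
        dE += p2(spins[i, j] @ nb) - p2(new @ nb)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] = new               # Metropolis acceptance

align = p2(np.sum(spins * np.roll(spins, 1, axis=0), axis=2)).mean()
print("mean nearest-neighbour <P2> along one axis:", round(float(align), 3))
```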

  5. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models.

    PubMed

    Shimayoshi, Takao; Cha, Chae Young; Amano, Akira

    2015-01-01

    Mathematical cell models are effective tools to understand cellular physiological functions precisely. For detailed analysis of model dynamics, in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate the contributions of respective model components to the model dynamics in the intact situation. The present technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named the instantaneous equilibrium point, which represents the trend of a model variable at some instant. This article also illustrates applications of the method to comprehensive myocardial cell models for analysing insights into the mechanisms of action potential generation and calcium transient. The analysis results exhibit quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and fall of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, which leads to a better understanding of cellular dynamics.
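
    The decomposition idea can be sketched on a toy two-current membrane model: the instantaneous dV/dt is split into per-component terms whose sizes, and near-cancellation at rest, quantify each component's contribution. All constants are illustrative, far simpler than the myocyte models analysed in the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy leaky membrane with a K current and a leak current.
C, gK, EK, gL, EL = 1.0, 0.3, -90.0, 0.1, -60.0

def components(V):
    """Per-component contributions to dV/dt (K current, leak current)."""
    return -gK * (V - EK) / C, -gL * (V - EL) / C

sol = solve_ivp(lambda t, y: [sum(components(y[0]))], (0.0, 10.0), [-20.0],
                t_eval=np.linspace(0.0, 10.0, 6))
for t, V in zip(sol.t, sol.y[0]):
    dK, dL = components(V)
    # dV/dt is the sum of the two terms; near the resting potential the
    # opposing contributions nearly cancel.
    print(f"t={t:4.1f}  dV/dt={dK + dL:8.4f}  K-term={dK:8.4f}  leak={dL:8.4f}")
```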

  6. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    ERIC Educational Resources Information Center

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many of the aspects of quantitative absorbance spectroscopy.
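
    A minimal numerical sketch of the effect being modeled: with a finite spectral bandpass, the observed absorbance is an intensity-weighted average of transmittances, so it falls below the ideal single-wavelength Beer-Lambert value at high concentration. Band shapes and absorptivities are illustrative.

```python
import numpy as np

lam = np.linspace(480.0, 520.0, 201)                     # bandpass (nm)
I = np.exp(-0.5 * ((lam - 500.0) / 8.0) ** 2)            # source intensity
eps = 1.0e4 * np.exp(-0.5 * ((lam - 500.0) / 5.0) ** 2)  # molar absorptivity
b = 1.0                                                  # path length (cm)

for c in (1e-5, 5e-5, 1e-4, 5e-4):                       # concentrations (M)
    # Observed transmittance is intensity-weighted over the bandpass:
    T = np.sum(I * 10.0 ** (-eps * b * c)) / np.sum(I)
    A_obs = -np.log10(T)
    A_mono = eps.max() * b * c        # ideal monochromatic absorbance
    print(f"c={c:.0e}  A_obs={A_obs:6.3f}  A_mono={A_mono:6.3f}")
```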

  7. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data

    PubMed Central

    Gritsenko, Alexey A.; Hulsman, Marc; Reinders, Marcel J. T.; de Ridder, Dick

    2015-01-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates. PMID:26275099
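
    A minimal stochastic TASEP sketch with uniform rates and random sequential updates: ribosomes initiate, hop codon by codon when the next site is free, and terminate at the end. The paper fits per-codon elongation and per-gene initiation rates; this toy only illustrates the exclusion dynamics.

```python
import numpy as np

rng = np.random.default_rng(9)
L, alpha, updates = 60, 0.3, 200_000   # codons, initiation prob, updates
occ = np.zeros(L, dtype=bool)          # ribosome occupancy per codon
completed = 0

for _ in range(updates):
    i = int(rng.integers(-1, L - 1))   # -1 means "attempt initiation"
    if i == -1:
        if not occ[0] and rng.random() < alpha:
            occ[0] = True              # ribosome loads at the start codon
    elif occ[i] and not occ[i + 1]:
        occ[i], occ[i + 1] = False, True   # hop one codon if site is free
    if occ[L - 1]:                     # simple termination at the last codon
        occ[L - 1] = False
        completed += 1

print("proteins completed:", completed,
      "| mean ribosome density:", occ.mean())
```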

  9. An existence result for a model of complete damage in elastic materials with reversible evolution

    NASA Astrophysics Data System (ADS)

    Bonetti, Elena; Freddi, Francesco; Segatti, Antonio

    2017-01-01

    In this paper, we consider a model describing evolution of damage in elastic materials, in which stiffness completely degenerates once the material is fully damaged. The model is written by using a phase transition approach, with respect to the damage parameter. In particular, a source of damage is represented by a quadratic form involving deformations, which vanishes in the case of complete damage. Hence, an internal constraint is ensured by a maximal monotone operator. The evolution of damage is considered "reversible", in the sense that the material may repair itself. We can prove an existence result for a suitable weak formulation of the problem, rewritten in terms of a new variable (an internal stress). Some numerical simulations are presented in agreement with the mathematical analysis of the system.

  10. Model based prediction of the existence of the spontaneous cochlear microphonic

    NASA Astrophysics Data System (ADS)

    Ayat, Mohammad; Teal, Paul D.

    2015-12-01

    In the mammalian cochlea, self-sustaining oscillation of the basilar membrane can cause vibration of the ear drum and produce spontaneous narrow-band air pressure fluctuations in the ear canal. These spontaneous fluctuations are known as spontaneous otoacoustic emissions. Small perturbations in the feedback gain of the cochlear amplifier have been proposed to be the generation source of self-sustaining oscillations of the basilar membrane. We hypothesise that the self-sustaining oscillations resulting from small perturbations in feedback gain produce spontaneous potentials in the cochlea. We demonstrate that, according to the results of the model, a measurable spontaneous cochlear microphonic must exist in the human cochlea. The existence of this signal has not yet been reported. However, this spontaneous electrical signal could play an important role in auditory research. Successful or unsuccessful recording of this signal will indicate whether previous hypotheses about the generation source of spontaneous otoacoustic emissions are valid or should be amended. In addition, according to the proposed model, the spontaneous cochlear microphonic is essentially an electrical analogue of spontaneous otoacoustic emissions. In certain experiments, the spontaneous cochlear microphonic may be more easily detected near its generation site, with proper electrical instrumentation, than is the spontaneous otoacoustic emission.

  11. Herd immunity and pneumococcal conjugate vaccine: a quantitative model.

    PubMed

    Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S

    2007-07-20

    Invasive pneumococcal disease in older children and adults declined markedly after the introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons 5+ years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F showed consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non-vaccine serotypes were also observed.
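
    The dose-response Poisson regression described above follows a standard pattern: log incidence is modelled as a linear function of the average number of doses, with population as an exposure offset. The sketch below uses the statsmodels library on invented counts, populations, and coverage values; it is not the ABCs/NIS data or the published fit.

        import numpy as np
        import statsmodels.api as sm

        years = np.arange(1994, 2004)
        doses = np.array([0, 0, 0, 0, 0, 0, 0.8, 1.6, 2.2, 2.6])  # assumed coverage
        cases = np.array([210, 205, 199, 214, 208, 211, 180, 140, 105, 80])
        pop = np.full(years.shape, 1_200_000)  # person-years at risk (assumed)

        # log(incidence) = b0 + b1 * average doses, with population as offset.
        X = sm.add_constant(doses)
        fit = sm.GLM(cases, X, family=sm.families.Poisson(),
                     offset=np.log(pop)).fit()
        print(f"{(1 - np.exp(fit.params[1])) * 100:.1f}% decline per average dose")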

  12. Endoscopic skull base training using 3D printed models with pre-existing pathology.

    PubMed

    Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes

    2015-03-01

    Endoscopic base of skull surgery has been growing in acceptance in the recent past due to improvements in visualisation and micro-instrumentation, as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides a very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platybasia and basilar invagination were created from the patient's original MRI and CT imaging data. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to rate the difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially, as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practise complex procedures in a controlled fashion under the supervision of experts.

  13. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    PubMed

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards.
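
    The difference between the two calculation models reduces to which residues are counted as dye-binding sites. Below is a minimal sketch assuming the signal is simply proportional to the number of binding residues; the toy sequence is invented, and the study's actual normalization is more involved.

        def binding_sites(seq: str, include_his: bool) -> int:
            """Count residues assumed to bind Coomassie G-250: Arg and Lys
            under M1, plus His under M2."""
            residues = {"R", "K"} | ({"H"} if include_his else set())
            return sum(aa in residues for aa in seq.upper())

        def predicted_response(seq: str, model: str = "M2") -> float:
            """Relative Bradford response per molecule (simplified)."""
            return float(binding_sites(seq, include_his=(model == "M2")))

        # Illustrative toy sequence, not one of the standards from the study.
        toy = "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGE"
        print(predicted_response(toy, "M1"), predicted_response(toy, "M2"))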

  14. A Quantitative Model-Driven Comparison of Command Approaches in an Adversarial Process Model

    DTIC Science & Technology

    2007-06-01

    12th ICCRTS, “Adapting C2 to the 21st Century.” Lenahan identified metrics and techniques for adversarial C2 process modeling. We intend to further that work by developing a set of adversarial process models...

  15. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, a rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry, such as chemical- or fire-protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.

  16. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    PubMed

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzotriazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use, their potential effects on aquatic organisms are a cause of concern. Non-testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes proposed by the Organisation for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH has been highlighted.
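
    The external-validation and applicability-domain workflow referred to above follows a generic pattern that can be sketched with scikit-learn; the synthetic descriptors, the plain linear model, and the 3p/n leverage cut-off below are illustrative stand-ins, not the published (B)TAZ models.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        # X: theoretical molecular descriptors; y: acute toxicity, e.g.
        # log(1/LC50). Both are randomly generated placeholders.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 4))
        y = X @ np.array([0.8, -0.5, 0.3, 0.1]) + rng.normal(scale=0.2, size=60)

        # External validation: hold out compounds never used in training.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        qsar = LinearRegression().fit(X_tr, y_tr)
        print("external R^2 (Q2-like):", qsar.score(X_te, y_te))

        # Leverage-based applicability domain: h > 3p/n flags extrapolation.
        H = X_te @ np.linalg.pinv(X_tr.T @ X_tr) @ X_te.T
        print("inside domain:", np.diag(H) < 3 * X_tr.shape[1] / X_tr.shape[0])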

  17. Using Existing Coastal Models To Address Ocean Acidification Modeling Needs: An Inside Look at Several East and Gulf Coast Regions

    NASA Astrophysics Data System (ADS)

    Jewett, E.

    2013-12-01

    Ecosystem forecast models have been in development for many US coastal regions for decades in an effort to understand how certain drivers, such as nutrients, freshwater and sediments, affect coastal water quality. These models have been used to inform coastal management interventions such as imposition of total maximum daily load allowances for nutrients or sediments to control hypoxia, harmful algal blooms and/or water clarity. Given the overlap of coastal acidification with hypoxia, it seems plausible that the geochemical models built to explain hypoxia and/or HABs might also be used, with additional terms, to understand how atmospheric CO2 is interacting with local biogeochemical processes to affect coastal waters. Examples of existing biogeochemical models from Galveston, the northern Gulf of Mexico, Tampa Bay, West Florida Shelf, Pamlico Sound, Chesapeake Bay, and Narragansett Bay will be presented and explored for suitability for ocean acidification modeling purposes.

  18. A parametric investigation of an existing supersonic relative tip speed propeller noise model. [turboprop aircraft

    NASA Technical Reports Server (NTRS)

    Dittmar, J. H.

    1977-01-01

    A high-tip-speed turboprop is being considered for future energy-conservative airplanes. The high tip speed of the propeller combined with the cruise speed of the airplane may result in supersonic relative flow on the propeller tips. These supersonic blade sections could generate noise that is a cabin environment problem. An existing supersonic propeller noise model was parametrically investigated to identify and evaluate the noise reduction variables. Both independent and interdependent parameter variations (constant propeller thrust) were performed. The noise reductions indicated by the independent investigation varied from sizable, in the case of reducing Mach number, to minimal, for adjusting the thickness and loading distributions. The noise reduction possibilities of decreasing relative Mach number were further investigated during the interdependent variations. The interdependent investigation indicated that significant noise reductions could be achieved by increasing the propeller diameter and/or increasing the number of propeller blades while maintaining a constant propeller thrust.

  19. Frequency domain modeling and dynamic characteristics evaluation of existing wind turbine systems

    NASA Astrophysics Data System (ADS)

    Chiang, Chih-Hung; Yu, Chih-Peng

    2016-04-01

    It is quite well accepted that frequency domain procedures are suitable for the design and dynamic analysis of wind turbine structures, especially for floating offshore wind turbines, since random wind loads and wave-induced motions are most readily simulated in the frequency domain. This paper presents specific applications of an effective frequency domain scheme to the linear analysis of wind turbine structures, in which a 1-D spectral element was developed based on the axially loaded member. The solution schemes are summarized for the spectral analyses of the tower, the blades, and the combined system with selected frequency-dependent coupling effects from foundation-structure interactions. Numerical examples demonstrate that the modal frequencies obtained using spectral-element models are in good agreement with those found in the literature. A 5-element mono-pile model results in less than 0.3% deviation from an existing 160-element model. It is preliminarily concluded that the proposed scheme is relatively efficient for quick verification of test data obtained from on-site vibration measurements using a microwave interferometer.

  20. Functional coverage of the human genome by existing structures, structural genomics targets, and homology models.

    PubMed

    Xie, Lei; Bourne, Philip E

    2005-08-01

    The bias in protein structure and function space resulting from experimental limitations and targeting of particular functional classes of proteins by structural biologists has long been recognized, but never continuously quantified. Using the Enzyme Commission and the Gene Ontology classifications as a reference frame, and integrating structure data from the Protein Data Bank (PDB), target sequences from the structural genomics projects, structure homology derived from the SUPERFAMILY database, and genome annotations from Ensembl and NCBI, we provide a quantified view, both at the domain and whole-protein levels, of the current and projected coverage of protein structure and function space relative to the human genome. Protein structures currently provide at least one domain that covers 37% of the functional classes identified in the genome; whole structure coverage exists for 25% of the genome. If all the structural genomics targets were solved (twice the current number of structures in the PDB), it is estimated that structures of one domain would cover 69% of the functional classes identified and complete structure coverage would be 44%. Homology models from existing experimental structures extend the 37% coverage to 56% of the genome as single domains and 25% to 31% for complete structures. Coverage from homology models is not evenly distributed by protein family, reflecting differing degrees of sequence and structure divergence within families. While these data provide coverage, conversely, they also systematically highlight functional classes of proteins for which structures should be determined. Current key functional families without structure representation are highlighted here; updated information on the "most wanted list" that should be solved is available on a weekly basis from http://function.rcsb.org:8080/pdb/function_distribution/index.html.

  1. What Are We Doing When We Translate from Quantitative Models?

    ERIC Educational Resources Information Center

    Critchfield, Thomas S.; Reed, Derek D.

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may…

  2. Existence and uniqueness of endemic states for the age-structured S-I-R epidemic model.

    PubMed

    Cha, Y; Iannelli, M; Milner, F A

    1998-06-15

    The existence and uniqueness of positive steady states for the age-structured S-I-R epidemic model with intercohort transmission are considered. Threshold results for the existence of endemic states are established for most cases. Uniqueness is shown in each case. The thresholds used are explicitly computable in terms of demographic and epidemiological parameters of the model.

  3. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  4. Combinatorial modeling of chromatin features quantitatively predicts DNA replication timing in Drosophila.

    PubMed

    Comoglio, Federico; Paro, Renato

    2014-01-01

    In metazoans, each cell type follows a characteristic, spatio-temporally regulated DNA replication program. Histone modifications (HMs) and chromatin binding proteins (CBPs) are fundamental for a faithful progression and completion of this process. However, no individual HM is strictly indispensable for origin function, suggesting that HMs may act combinatorially in analogy to the histone code hypothesis for transcriptional regulation. In contrast to gene expression, however, the relationship between combinations of chromatin features and DNA replication timing has not yet been demonstrated. Here, by exploiting a comprehensive data collection consisting of 95 CBPs and HMs, we investigated their combinatorial potential for the prediction of DNA replication timing in Drosophila using quantitative statistical models. We found that while combinations of CBPs exhibit moderate predictive power for replication timing, pairwise interactions between HMs lead to accurate predictions genome-wide that can be locally further improved by CBPs. Independent feature importance and model analyses led us to derive a simplified, biologically interpretable model of the relationship between chromatin landscape and replication timing that reaches 80% of the full model accuracy using six model terms. Finally, we show that pairwise combinations of HMs are able to predict differential DNA replication timing across different cell types. All in all, our work provides support for the existence of combinatorial HM patterns for DNA replication and reveals key cell-type-independent elements thereof, whose experimental investigation might contribute to elucidating the regulatory mode of this fundamental cellular process.
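
    One simple way to encode "pairwise interactions between HMs" in a quantitative statistical model is an interaction-expanded regression, sketched below with scikit-learn on synthetic enrichment values; the feature counts and model class are placeholders for the paper's actual setup.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        # Rows: genomic bins; columns: HM enrichments; y: replication timing.
        rng = np.random.default_rng(0)
        hm = rng.normal(size=(1000, 6))
        y = (0.7 * hm[:, 0] - 0.4 * hm[:, 1]
             + 0.5 * hm[:, 2] * hm[:, 3]          # a pairwise interaction
             + rng.normal(scale=0.3, size=1000))

        # Expand features with all pairwise products of marks.
        pairs = PolynomialFeatures(degree=2, interaction_only=True,
                                   include_bias=False)
        X = pairs.fit_transform(hm)
        model = LinearRegression().fit(X, y)
        print("R^2 with pairwise HM interactions:", round(model.score(X, y), 2))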

  5. CytoModeler: a tool for bridging large-scale network analysis and dynamic quantitative modeling

    PubMed Central

    Xia, Tian; Van Hemert, John; Dickerson, Julie A.

    2011-01-01

    Summary: CytoModeler is an open-source Java application based on the Cytoscape platform. It integrates large-scale network analysis and quantitative modeling by combining omics analysis on the Cytoscape platform, access to deterministic and stochastic simulators, and static and dynamic network context visualizations of simulation results. Availability: Implemented in Java, CytoModeler runs with Cytoscape 2.6 and 2.7. Binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv/cytomodeler/. Contact: julied@iastate.edu; netscape@iastate.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21511714

  6. Assessing Quantitative Literacy in Higher Education: An Overview of Existing Research and Assessments with Recommendations for Next-Generation Assessment. Research Report. ETS RR-14-22

    ERIC Educational Resources Information Center

    Roohr, Katrina Crotts; Graf, Edith Aurora; Liu, Ou Lydia

    2014-01-01

    Quantitative literacy has been recognized as an important skill in the higher education and workforce communities, focusing on problem solving, reasoning, and real-world application. As a result, there is a need by various stakeholders in higher education and workforce communities to evaluate whether college students receive sufficient training on…

  7. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models

    PubMed Central

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-01-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of these quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. PMID:27591750

  8. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of these quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population.
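
    The central computation described above, integrating link-function quantities over the distribution of latent values, can be written down directly for a Poisson GLMM with a log link. The sketch below is a hand-rolled analogue of what the QGglmm package automates, not its API, and all parameter values are illustrative.

        import numpy as np
        from scipy import integrate, stats

        mu, va, vr = 1.0, 0.3, 0.5   # latent mean, additive and residual variance
        vtot = va + vr
        latent = stats.norm(mu, np.sqrt(vtot))

        def latent_mean(f):
            """E[f(l)] over the latent normal, by numerical quadrature."""
            val, _ = integrate.quad(lambda l: f(l) * latent.pdf(l), mu - 10, mu + 10)
            return val

        z = latent_mean(np.exp)                   # data-scale mean, E[exp(l)]
        print(z, np.exp(mu + vtot / 2))           # check against analytic value

        # Data-scale phenotypic variance: latent-induced variance plus Poisson
        # noise; additive variance maps through Psi, the mean derivative of the
        # inverse link (d/dl exp(l) = exp(l)).
        vp_obs = latent_mean(lambda l: np.exp(2 * l)) - z**2 + z
        psi = latent_mean(np.exp)
        print("data-scale heritability:", psi**2 * va / vp_obs)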

  9. Physically based estimation of soil water retention from textural data: General framework, new models, and streamlined existing models

    USGS Publications Warehouse

    Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.

    2007-01-01

    Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculations and other factors. © Soil Science Society of America. All rights reserved.
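
    The framework's defining step, assigning each characteristic particle dimension a matric pressure through the capillary relation plus a pore-volume rule, can be sketched in a few lines. The pore-to-particle radius factor, void ratio, and three-class texture below are invented for illustration and do not reproduce any particular model from the paper.

        import numpy as np

        SIGMA = 0.072     # surface tension of water, N/m
        RHO_G = 9810.0    # unit weight of water (rho * g), N/m^3
        VOID_RATIO = 0.6  # assumed void ratio

        def retention_from_texture(particle_radii, mass_fractions):
            """Map particle size classes to (|psi| in m of head, theta)."""
            r = np.asarray(particle_radii, dtype=float)
            w = np.asarray(mass_fractions, dtype=float)
            pore_r = 0.3 * r                      # assumed pore/particle ratio
            # Capillary inverse proportionality between radius and pressure:
            psi = 2.0 * SIGMA / (RHO_G * pore_r)
            order = np.argsort(pore_r)            # pores fill smallest first
            porosity = VOID_RATIO / (1.0 + VOID_RATIO)
            theta = np.cumsum(w[order]) * porosity
            return psi[order], theta

        # Illustrative three-class texture (radii in metres).
        psi, theta = retention_from_texture([1e-6, 1e-5, 1e-4], [0.2, 0.3, 0.5])
        print(np.c_[psi, theta])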

  10. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model

    PubMed Central

    Ma, Rongfei

    2015-01-01

    In this paper, ammonia quantitative analysis based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain the current-voltage (I-V) data. Measurement data were processed with the non-linear bistable dynamic model. Results showed that the proposed method quantitatively determined ammonia concentrations. PMID:25975362

  11. Quantitative, comprehensive, analytical model for magnetic reconnection in Hall magnetohydrodynamics.

    PubMed

    Simakov, Andrei N; Chacón, L

    2008-09-05

    Dissipation-independent, or "fast", magnetic reconnection has been observed computationally in Hall magnetohydrodynamics (MHD) and predicted analytically in electron MHD. However, a quantitative analytical theory of reconnection valid for arbitrary ion inertial lengths, d_i, has been lacking and is proposed here for the first time. The theory describes a two-dimensional reconnection diffusion region, provides expressions for reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d_i. It also confirms the electron MHD prediction that both open and elongated diffusion regions allow fast reconnection, and reveals strong dependence of the reconnection rates on d_i.

  12. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    PubMed

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications.

  13. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...

  14. Impact assessment of abiotic resources in LCA: quantitative comparison of selected characterization models.

    PubMed

    Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F

    2014-10-07

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA.

  15. Thermodynamic Modeling of a Solid Oxide Fuel Cell to Couple with an Existing Gas Turbine Engine Model

    NASA Technical Reports Server (NTRS)

    Brinson, Thomas E.; Kopasakis, George

    2004-01-01

    The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in combining a solid oxide fuel cell (SOFC) with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate a SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For the project's application, the main issue with this model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create a SOFC model based on the DOE model that takes the fuel cell's flow rates as inputs and outputs the temperatures of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two procedures. First, for a given flow rate, the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and temperature curve will be generated, where the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve, because voltage is a function of temperature. Once the temperature profile is created and validated, the model will then be placed into the turbine engine simulation for system analysis.
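
    The rearranged energy balance described above, with stream temperature as an output rather than an input, reduces to a steady-flow First Law statement. The sketch below uses made-up flow, heat, and power numbers, not values from the DOE model.

        # Steady-flow First Law for the gas path, neglecting kinetic and
        # potential terms: Q_gen - P_elec = m_dot * cp * (T_out - T_in).

        def exit_temperature(m_dot, cp, t_in, q_gen, p_elec):
            """Solve the energy balance for the exit stream temperature (K)."""
            return t_in + (q_gen - p_elec) / (m_dot * cp)

        m_dot = 0.5     # gas mass flow, kg/s (assumed)
        cp = 1100.0     # mean specific heat, J/(kg K) (assumed)
        t_in = 950.0    # inlet temperature, K (assumed)
        q_gen = 250e3   # heat released by stack reactions, W (assumed)
        p_elec = 120e3  # electrical power drawn from the stack, W (assumed)

        print(exit_temperature(m_dot, cp, t_in, q_gen, p_elec), "K")

    Sweeping m_dot then yields the temperature-versus-fuel-flow profile the project needs.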

  16. Photon-tissue interaction model for quantitative assessment of biological tissues

    NASA Astrophysics Data System (ADS)

    Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann

    2014-02-01

    In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.

  17. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of the thiM operon and transcription and translation of the thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by the A. thaliana riboswitch is governed by mass-action law, whereas it is of a kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulation, which shows that kinetically regulated riboswitches react more sharply to concentration variations of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506
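
    The induced-fit scheme described (initial binding followed by a conformational change) and its interplay with polymerase pausing can be illustrated with a tiny kinetic integration; all rate constants and the pause duration below are invented, not the measured values from the study.

        from scipy.integrate import solve_ivp

        KON, KOFF = 1.0e5, 0.5  # initial binding, 1/(M s) and 1/s (assumed)
        KF, KR = 0.2, 0.01      # conformational change, 1/s (assumed)
        TPP = 5e-6              # ligand concentration, M (assumed)
        PAUSE = 10.0            # polymerase pause duration, s (assumed)

        def kinetics(t, y):
            free, c1, c2 = y    # free aptamer, initial and locked complexes
            return [KOFF * c1 - KON * TPP * free,
                    KON * TPP * free - (KOFF + KF) * c1 + KR * c2,
                    KF * c1 - KR * c2]

        sol = solve_ivp(kinetics, (0.0, PAUSE), [1.0, 0.0, 0.0])
        print("fraction switched by end of pause:", sol.y[2, -1])

    The kinetic-regulation picture corresponds to asking how this fraction depends on TPP concentration for a fixed, finite pause.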

  18. Quantitative analytical model for magnetic reconnection in Hall magnetohydrodynamics

    SciTech Connect

    Simakov, Andrei N

    2008-01-01

    Magnetic reconnection is of fundamental importance for laboratory and naturally occurring plasmas. Reconnection usually develops on time scales which are much shorter than those associated with classical collisional dissipation processes, and which are not fully understood. While such dissipation-independent (or 'fast') reconnection rates have been observed in particle and Hall magnetohydrodynamics (MHD) simulations and predicted analytically in electron MHD, a quantitative analytical theory of fast reconnection valid for arbitrary ion inertial lengths d_i has been lacking. Here we propose such a theory without a guide field. The theory describes two-dimensional magnetic field diffusion regions, provides expressions for the reconnection rates, and derives a formal criterion for fast reconnection in terms of dissipation parameters and d_i. It also demonstrates that both open X-point and elongated diffusion regions allow dissipation-independent reconnection and reveals a possibility of strong dependence of the reconnection rates on d_i.

  19. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  20. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  1. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • SDMProjectBuilder (which includes the Microbial Source Module as part...

  2. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time.

  3. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  4. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.

  5. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
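
    Dempster's rule of combination, used above to fuse inferences from the parallel diagnostic models, has a compact implementation. The fault hypotheses and mass assignments below are invented for illustration only.

        from itertools import product

        def dempster_combine(m1, m2):
            """Dempster's rule for mass functions keyed by frozenset focal
            elements; conflicting mass is renormalized away."""
            combined, conflict = {}, 0.0
            for (a, w1), (b, w2) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Evidence from two diagnostic models over faults {f1, f2, f3}.
        theta = frozenset({"f1", "f2", "f3"})
        m_model1 = {frozenset({"f1"}): 0.6, theta: 0.4}
        m_model2 = {frozenset({"f1", "f2"}): 0.7, theta: 0.3}
        print(dempster_combine(m_model1, m_model2))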

  6. A Quantitative Causal Model Theory of Conditional Reasoning

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  7. Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)

    NASA Astrophysics Data System (ADS)

    Sapiano, M. R.

    2010-12-01

    Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges associated with retrieving stratiform precipitation from satellites and the difficulty in resolving sub-grid scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter-degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation with higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables will be explored to see if these other model variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining weights and errors of any merged products.

  8. Quantitative Methods for Comparing Different Polyline Stream Network Models

    SciTech Connect

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring the relative horizontal accuracy of complex linear spatial features are described, and sample source code (pseudo code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving the extraction of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that direct delineation from LiDAR point clouds yielded an excellent and much better match, as indicated by the LRMSE.
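
    The paper's own pseudo code is not reproduced here, but one plausible reading of the two measures for 2-D polylines is sketched below; pairing points by equal fractional distance along each line is an assumption of this sketch.

        import numpy as np

        def length(poly):
            """Total length of an (n, 2) polyline."""
            return float(np.sum(np.linalg.norm(np.diff(poly, axis=0), axis=1)))

        def relative_sinuosity(derived, reference):
            """Sinuosity (length / endpoint distance) of the derived network
            relative to the reference network."""
            def sinuosity(p):
                return length(p) / float(np.linalg.norm(p[-1] - p[0]))
            return sinuosity(derived) / sinuosity(reference)

        def lrmse(derived, reference, n_samples=100):
            """Longitudinal RMSE: offsets between points sampled at matching
            fractional distances along each polyline."""
            def sample(poly, fracs):
                seg = np.linalg.norm(np.diff(poly, axis=0), axis=1)
                cum = np.concatenate([[0.0], np.cumsum(seg)])
                d = fracs * cum[-1]
                return np.column_stack([np.interp(d, cum, poly[:, i])
                                        for i in (0, 1)])
            f = np.linspace(0.0, 1.0, n_samples)
            diff = sample(derived, f) - sample(reference, f)
            return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))

        ref = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]])
        der = np.array([[0.0, 0.1], [0.9, 0.6], [2.0, 0.1]])
        print(relative_sinuosity(der, ref), lrmse(der, ref))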

  9. Detection of cardiomyopathy in an animal model using quantitative autoradiography

    SciTech Connect

    Kubota, K.; Som, P.; Oster, Z.H.; Brill, A.B.; Goodman, M.M.; Knapp, F.F. Jr.; Atkins, H.L.; Sole, M.J.

    1988-10-01

    A fatty acid analog, 15-(p-iodophenyl)-3,3-dimethylpentadecanoic acid (DMIPP), was studied in cardiomyopathic (CM) and normal age-matched Syrian hamsters. Dual-tracer quantitative whole-body autoradiography (QARG) with DMIPP and 2-(¹⁴C(U))-2-deoxy-2-fluoro-D-glucose (FDG) or with FDG and ²⁰¹Tl enabled comparison of the uptake of a fatty acid and a glucose analog with the blood flow. These comparisons were carried out at the onset and mid-stage of the disease, before congestive failure developed. Groups of CM and normal animals were treated with verapamil from the age of 26 days, before the onset of the disease, for 41 days. In CM hearts, areas of decreased DMIPP uptake were seen. These areas were much larger than the areas of decreased FDG or ²⁰¹Tl uptake. In early CM, only minimal changes in FDG or ²⁰¹Tl uptake were observed as compared to controls. Treatment of CM-prone animals with verapamil prevented any changes in DMIPP, FDG, or ²⁰¹Tl uptake. DMIPP seems to be a more sensitive indicator of early cardiomyopathic changes than ²⁰¹Tl or FDG. A trial of DMIPP and SPECT in the diagnosis of human disease, as well as for monitoring the effects of drugs which may prevent it, seems warranted.

  10. A quantitative model of application slow-down in multi-resource shared systems

    SciTech Connect

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.
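
    The paper characterizes each job by a vector-valued loading statistic and obtains dilation factors from a quadratic function of the loading vectors. The sketch below shows one such quadratic form on invented loading vectors; it illustrates the shape of the model, not the paper's exact formulation.

        import numpy as np

        # Per-job utilization of each resource (CPU, disk, network); invented.
        L = np.array([
            [0.6, 0.2, 0.1],  # job A
            [0.3, 0.5, 0.2],  # job B
        ])

        def dilation_factors(L):
            """Slow-down per job: 1.0 (no contention) plus a quadratic term
            coupling each job's loading vector to the load of co-runners."""
            total = L.sum(axis=0)
            self_load = np.einsum("ij,ij->i", L, L)
            return 1.0 + L @ total - self_load

        print(dilation_factors(L))  # completion-time multipliers per job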

  11. A quantitative model of application slow-down in multi-resource shared systems

    DOE PAGES

    Lim, Seung-Hwan; Kim, Youngjae

    2016-12-26

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this study, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We extended the D-factor model to capture the slow-down of applications when multiple identical resources exist, such as multi-core and multi-disk environments. Finally, validation results of the extended D-factor model with HPC checkpoint applications on parallel file systems show that the D-factor accurately captures the slow-down of concurrent applications in such environments.

  12. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics.
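
    The flavour of a Boolean clock model, synchronous logic updates gated by a light input, fits in a few lines. The two-gene circuit and update rules below are invented and far simpler than the fitted circadian circuits in the paper.

        # Two-gene Boolean sketch: A is activated by light or by absence of
        # its repressor B; B simply tracks A with a one-step delay.
        def step(state, light):
            a, b = state
            return (int(light or not b), a)

        # Synchronous simulation over two cycles of a 6-on/6-off light regime.
        state = (1, 0)
        for t in range(24):
            light = (t % 12) < 6
            state = step(state, light)
            print(t, int(light), state)

    In constant darkness this toy circuit free-runs through a four-state cycle, while the light input pins gene A on, a crude analogue of entrainment.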

  13. A quantitative model of plasma in Neptune's magnetosphere

    NASA Astrophysics Data System (ADS)

    Richardson, J. D.

    1993-07-01

    A model encompassing plasma transport and energy processes is applied to Neptune's magnetosphere. Starting with profiles of the neutral densities and the electron temperature, the model calculates the plasma density and ion temperature profiles. Good agreement between model results and observations is obtained for a neutral source of 5 × 10²⁵ s⁻¹ if the diffusion coefficient is 10⁻⁸ L³ R_N²/s, plasma is lost at a rate 1/3 that of the strong diffusion rate, and plasma subcorotates in the region outside Triton.

  14. Efficient Recycled Algorithms for Quantitative Trait Models on Phylogenies

    PubMed Central

    Hiscott, Gordon; Fox, Colin; Parry, Matthew; Bryant, David

    2016-01-01

    We present an efficient and flexible method for computing likelihoods for phenotypic traits on a phylogeny. The method does not resort to Monte Carlo computation but instead blends Felsenstein’s discrete character pruning algorithm with methods for numerical quadrature. It is not limited to Gaussian models and adapts readily to model uncertainty in the observed trait values. We demonstrate the framework by developing efficient algorithms for likelihood calculation and ancestral state reconstruction under Wright’s threshold model, applying our methods to a data set of trait data for extrafloral nectaries across a phylogeny of 839 Fabales species. PMID:27056412

  15. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-04-18

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure, we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold, rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
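
    A compartment model of this kind reduces to a small ODE system in hive bees and foragers. The sketch below mirrors the structure described (eclosion saturating with colony size, social inhibition of recruitment, forager death rate as the control parameter) with illustrative parameter values, without claiming the paper's exact equations.

        from scipy.integrate import solve_ivp

        L_MAX = 2000.0  # maximum eclosion rate, bees/day (assumed)
        W = 27000.0     # brood-rearing saturation constant (assumed)
        ALPHA = 0.25    # maximum recruitment rate to foraging, 1/day (assumed)
        SIGMA = 0.75    # social inhibition by foragers (assumed)
        M = 0.24        # forager death rate, 1/day (the control parameter)

        def colony(t, y):
            h, f = y                               # hive bees, foragers
            n = h + f
            eclosion = L_MAX * n / (W + n)
            recruitment = h * (ALPHA - SIGMA * f / n)
            return [eclosion - recruitment, recruitment - M * f]

        sol = solve_ivp(colony, (0.0, 200.0), [10000.0, 2000.0])
        print("colony size after 200 days:", sol.y[0, -1] + sol.y[1, -1])

    Raising M past a critical value makes the population collapse instead of settling, reproducing the threshold behaviour the model predicts.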

  16. Quantitative comparisons of numerical models of brittle deformation

    NASA Astrophysics Data System (ADS)

    Buiter, S.

    2009-04-01

    Numerical modelling of brittle deformation in the uppermost crust can be challenging owing to the requirement of an accurate pressure calculation, the ability to achieve post-yield deformation and localisation, and the choice of rheology (plasticity law). One way to approach these issues is to conduct model comparisons that can evaluate the effects of different implementations of brittle behaviour in crustal deformation models. We present a comparison of three brittle shortening experiments for fourteen different numerical codes, which use finite element, finite difference, boundary element and distinct element techniques. Our aim is to constrain and quantify the variability among models in order to improve our understanding of the causes leading to differences between model results. Our first experiment of translation of a stable sand-like wedge serves as a reference that allows for testing against analytical solutions (e.g., taper angle, root-mean-square velocity and gravitational rate of work). The next two experiments investigate an unstable wedge in a sandbox-like setup, which deforms by inward translation of a mobile wall. All models accommodate shortening by in-sequence formation of forward shear zones. We analyse the location, dip angle and spacing of thrusts in detail, as previous comparisons have shown that these can be highly variable in numerical and analogue models of crustal shortening and extension. We find that an accurate implementation of boundary friction is important for our models. Our results are encouraging in the overall agreement among models in their dynamic evolution, but they also show the effort that is needed to understand shear-zone evolution. GeoMod2008 Team: Markus Albertz, Michele Cooke, Susan Ellis, Taras Gerya, Luke Hodkinson, Kristin Hughes, Katrin Huhn, Boris Kaus, Walter Landry, Bertrand Maillot, Christophe Pascal, Anton Popov, Guido Schreurs, Christopher Beaumont, Tony Crook, Mario Del Castello and Yves Leroy

  17. Quantitative comparisons of numerical models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne

    2010-05-01

    Numerical and laboratory models are often used to investigate the evolution of deformation processes at various scales in the crust and lithosphere. In both approaches, the freedom in the choice of simulation method, materials and their properties, and deformation laws could affect model outcomes. To assess the role of modelling method and to quantify the variability among models, we have performed a comparison of laboratory and numerical experiments. Here, we present results of 11 numerical codes, which use finite element, finite difference and distinct element techniques. We present three experiments that describe shortening of a sand-like, brittle wedge. The material properties of the numerical 'sand', the model set-up and the boundary conditions are strictly prescribed and follow the analogue setup as closely as possible. Our first experiment translates a non-accreting wedge with a stable surface slope of 20 degrees. In agreement with critical wedge theory, all models maintain the same surface slope and do not deform. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge in a sandbox-like setup, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. We show that we successfully simulate sandbox-style brittle behaviour using different numerical modelling techniques and that we obtain the same styles of deformation behaviour in numerical and laboratory experiments at similar levels of variability. The GeoMod2008 Numerical Team: Markus Albertz, Michele Cooke, Tony Crook, David Egholm, Susan Ellis, Taras Gerya, Luke Hodkinson, Boris Kaus, Walter Landry, Bertrand Maillot, Yury Mishin

  18. Derivation of a quantitative minimal model from a detailed elementary-step mechanism supported by mathematical coupling analysis

    NASA Astrophysics Data System (ADS)

    Shaik, O. S.; Kammerer, J.; Gorecki, J.; Lebiedz, D.

    2005-12-01

    Accurate experimental data increasingly allow the development of detailed elementary-step mechanisms for complex chemical and biochemical reaction systems. Model reduction techniques are widely applied to obtain representations in lower-dimensional phase space which are more suitable for mathematical analysis, efficient numerical simulation, and model-based control tasks. Here, we exploit a recently implemented numerical algorithm for error-controlled computation of the minimum dimension required for a still accurate reduced mechanism, based on automatic time scale decomposition and relaxation of fast modes. We determine species contributions to the active (slow) dynamical modes of the reaction system and exploit this information, in combination with quasi-steady-state and partial-equilibrium approximations, for explicit model reduction of a novel detailed chemical mechanism for the Ru-catalyzed light-sensitive Belousov-Zhabotinsky reaction. A minimum dimension of seven is demonstrated to be mandatory for the reduced model to show good quantitative consistency with the full model in numerical simulations. We derive such a maximally reduced seven-variable model from the detailed elementary-step mechanism and demonstrate that it reproduces the dynamical features of the full model quantitatively within a given accuracy tolerance.
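
    The selection of a minimum dimension rests on time scale decomposition, whose essence can be illustrated independently of the implemented algorithm: the eigenvalues of the model Jacobian separate slow from fast modes, and the number of slow modes bounds how far the model can be reduced. The sketch below is a toy illustration with an invented diagonal Jacobian, not the authors' error-controlled procedure.

    import numpy as np

    def slow_mode_count(jacobian, gap_factor=100.0):
        # time scale of a decaying mode i is ~ 1/|Re(lambda_i)|
        eig = np.linalg.eigvals(jacobian)
        rates = np.sort(np.abs(eig.real))          # slow -> fast
        # split where consecutive rates jump by more than gap_factor
        for i in range(1, len(rates)):
            if rates[i] > gap_factor * max(rates[i - 1], 1e-12):
                return i
        return len(rates)

    # toy Jacobian with three slow and two fast modes (illustrative numbers)
    J = np.diag([-0.01, -0.05, -0.1, -500.0, -2000.0])
    print("slow modes:", slow_mode_count(J))   # -> 3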

  19. Magnetospheric mapping with a quantitative geomagnetic field model

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Mead, G. D.

    1975-01-01

    Mapping the magnetosphere on a dipole geomagnetic field model by projecting field and particle observations onto the model is described. High-latitude field lines are traced between the earth's surface and their intersection with either the equatorial plane or a cross section of the geomagnetic tail, and data from low-altitude orbiting satellites are projected along field lines to the outer magnetosphere. This procedure is analyzed, and the resultant mappings are illustrated. Extension of field lines into the geomagnetic tail and low-altitude determination of the polar cap and cusp are presented. It is noted that while there is good agreement among the various data, more particle measurements are necessary to clear up statistical uncertainties and to facilitate comparison of statistical models.

  20. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    PubMed

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
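
    The Frank-Tamm relation underlying the production-efficiency model admits a compact worked example. Between wavelengths λ1 and λ2, a particle of charge z and speed β in a medium of refractive index n emits dN/dx = 2πα z^2 (1/λ1 - 1/λ2)(1 - 1/(β^2 n^2)) photons per unit path length, and nothing at all below the threshold βn ≤ 1, which is why low-energy emitters can fail to produce CR directly. The numbers below are illustrative and are not taken from the paper.

    import math

    ALPHA = 1 / 137.036          # fine-structure constant

    def cerenkov_yield(beta, n=1.33, z=1, l1=400e-9, l2=700e-9):
        if beta * n <= 1.0:
            return 0.0           # below the Cerenkov threshold: no light
        return 2 * math.pi * ALPHA * z**2 * (1/l1 - 1/l2) * (1 - 1/(beta**2 * n**2))

    # a ~1 MeV beta particle in water (beta ~ 0.94) vs. one below threshold
    for beta in (0.94, 0.70):
        print(f"beta={beta}: {cerenkov_yield(beta):.0f} photons/m (400-700 nm)")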

  1. Analysis of protein complexes through model-based biclustering of label-free quantitative AP-MS data

    PubMed Central

    Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I

    2010-01-01

    Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein–protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving bait proteins as compared with proteins only identified as preys. The method does not require specification of the number of bait clusters, which is an advantage over existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data. PMID:20571534

  2. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  3. Inference of quantitative models of bacterial promoters from time-series reporter gene data.

    PubMed

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for
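
    The correction step can be made concrete with a small sketch: promoter activity is first recovered from the reporter time series, then pushed through a kinetic model with the (different) half-life of the protein of interest. All parameter values and the synthetic signal below are assumptions for illustration, not measurements from the study.

    import numpy as np

    t = np.linspace(0, 600, 121)                      # minutes
    r = 50 * (1 - np.exp(-t / 120.0))                 # synthetic reporter signal
    k_r, gamma_r = 1.0, np.log(2) / 90.0              # reporter kinetics (assumed)
    k_p, gamma_p = 1.0, np.log(2) / 30.0              # protein kinetics (assumed)

    # promoter activity from the reporter model dr/dt = k_r * a(t) - gamma_r * r
    activity = (np.gradient(r, t) + gamma_r * r) / k_r

    p = np.zeros_like(t)                              # forward-Euler integration of
    for i in range(1, len(t)):                        # dp/dt = k_p * a(t) - gamma_p * p
        dt = t[i] - t[i - 1]
        p[i] = p[i - 1] + dt * (k_p * activity[i - 1] - gamma_p * p[i - 1])

    print("reporter at t=600:", round(r[-1], 1), " estimated protein:", round(p[-1], 1))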

  4. Quantitative experimental modelling of fragmentation during explosive volcanism

    NASA Astrophysics Data System (ADS)

    Thordén Haug, Ø.; Galland, O.; Gisler, G.

    2012-04-01

    Phreatomagmatic eruptions result from the violent interaction between magma and an external source of water, such as ground water or a lake. This interaction causes fragmentation of the magma and/or the host rock, resulting in coarse-grained (lapilli) to very fine-grained (ash) material. The products of phreatomagmatic explosions are classically described by their fragment size distribution, which commonly follows power laws of exponent D. Such a descriptive approach, however, considers the final products only and does not provide information on the dynamics of fragmentation. The aim of this contribution is thus to address the following fundamental questions. What physics governs fragmentation processes? How does fragmentation occur through time? What mechanisms produce power law fragment size distributions? And what scaling laws control the exponent D? To address these questions, we performed a quantitative experimental study. The setup consists of a Hele-Shaw cell filled with a layer of cohesive silica flour, at the base of which a pulse of pressurized air is injected, leading to fragmentation of the layer of flour. The fragmentation process is monitored through time using a high-speed camera. By systematically varying the air pressure (P) and the thickness of the flour layer (h), we observed two morphologies of fragmentation: "lift off", where the silica flour above the injection inlet is ejected upwards, and "channeling", where the air pierces through the layer along a sub-vertical conduit. By building a phase diagram, we show that the morphology is controlled by P/(dgh), where d is the density of the flour and g is the gravitational acceleration. To quantify the fragmentation process, we developed a Matlab image analysis program, which calculates the number and sizes of the fragments, and thus the fragment size distribution, during the experiments. The fragment size distributions are in general described by power law distributions of
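
    The image-analysis step translates readily into code. The authors used a Matlab program; the sketch below is an illustrative Python equivalent on a synthetic binary image: label connected fragments, collect their sizes, and estimate the exponent D from the slope of the complementary cumulative size distribution on log-log axes.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    image = rng.random((512, 512)) > 0.55        # stand-in for a thresholded frame

    labels, n = ndimage.label(image)             # connected-component fragments
    sizes = np.sort(np.bincount(labels.ravel())[1:])[::-1]   # skip background label 0

    # N(>=s) versus s on log-log axes; the slope is approximately -D for a power-law tail
    s = sizes.astype(float)
    ccdf = np.arange(1, len(s) + 1)
    mask = s > 5                                  # fit the tail only
    slope, _ = np.polyfit(np.log(s[mask]), np.log(ccdf[mask]), 1)
    print(f"{n} fragments, estimated D = {-slope:.2f}")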

  5. A quantitative magnetospheric model derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1975-01-01

    The model is derived by making least squares fits to magnetic field measurements from four Imp satellites. It includes four sets of coefficients, representing different degrees of magnetic disturbance as determined by the range of Kp values. The data are fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the effects of seasonal north-south asymmetries are contained. The expansion is divergence-free, but unlike the usual scalar potential expansion, the model contains a nonzero curl representing currents distributed within the magnetosphere. The latitude at the earth separating open polar cap field lines from field lines closing on the day side is about 5 deg lower than that determined by previous theoretically derived models. At times of high Kp, additional high-latitude field lines extend back into the tail. Near solstice, the separation latitude can be as low as 75 deg in the winter hemisphere. The average northward component of the external field is much smaller than that predicted by theoretical models; this finding indicates the important effects of distributed currents in the magnetosphere.

  6. A quantitative risk model for early lifecycle decision making

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.

  7. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    ERIC Educational Resources Information Center

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  8. Global existence of solutions and uniform persistence of a diffusive predator-prey model with prey-taxis

    NASA Astrophysics Data System (ADS)

    Wu, Sainan; Shi, Junping; Wu, Boying

    2016-04-01

    This paper proves the global existence and boundedness of solutions to a general reaction-diffusion predator-prey system with prey-taxis defined on a smooth bounded domain with no-flux boundary condition. The result holds for domains in arbitrary spatial dimension and small prey-taxis sensitivity coefficient. This paper also proves the existence of a global attractor and the uniform persistence of the system under some additional conditions. Applications to models from ecology and chemotaxis are discussed.
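
    A generic system of the type covered by such results, with predator density u, prey density v, prey-taxis sensitivity χ, and no-flux (homogeneous Neumann) boundary conditions on a smooth bounded domain Ω, has the form below; the notation is assumed for illustration and is not quoted from the paper.

    u_t = d_1 \Delta u - \nabla \cdot ( \chi\, u \nabla v ) + u \, ( c\, g(v) - \theta ),
    v_t = d_2 \Delta v + v\,(a - v) - u\, g(v),
    \partial_\nu u = \partial_\nu v = 0 \ \text{on } \partial\Omega,

    where g(v) is the functional response and a, c, θ are positive constants; the cross-diffusion (taxis) term is what obstructs standard boundedness arguments and motivates the smallness condition on χ.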

  9. Global existence and uniqueness of classical solutions for a generalized quasilinear parabolic equation with application to a glioblastoma growth model.

    PubMed

    Wen, Zijuan; Fan, Meng; Asiri, Asim M; Alzahrani, Ebraheem O; El-Dessoky, Mohamed M; Kuang, Yang

    2017-04-01

    This paper studies the global existence and uniqueness of classical solutions for a generalized quasilinear parabolic equation with appropriate initial and mixed boundary conditions. Under some practicable regularity criteria on the diffusion term and nonlinearity, we establish the local existence and uniqueness of classical solutions based on a contraction mapping. This local solution can be continued for all positive time by employing the methods of energy estimates, L^p-theory, and Schauder estimates of linear parabolic equations. A straightforward application of the global existence result of classical solutions to a density-dependent diffusion model of in vitro glioblastoma growth is also presented.
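
    The problem class has the generic shape below (the notation is assumed here for illustration, not quoted from the paper):

    u_t = \nabla \cdot \big( D(u) \nabla u \big) + f(u) \quad \text{in } \Omega \times (0,T),

    with density-dependent diffusion D(u), reaction term f(u), and mixed boundary conditions; the paper's regularity criteria are conditions on D and f, and in the glioblastoma application D depends on the tumour cell density u.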

  10. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that our proposed integrative framework can learn the relationships between biochemical reactants qualitatively and make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
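
    The quantitative stage can be sketched in a few lines of simulated annealing. The cost function, the single rate parameter, and the geometric cooling schedule below are assumptions made for illustration, not the authors' implementation.

    import math
    import random

    random.seed(1)
    target = [math.exp(-0.3 * t) for t in range(10)]          # target behaviour

    def cost(k):
        # squared error between candidate decay model and target time course
        return sum((math.exp(-k * t) - y) ** 2 for t, y in enumerate(target))

    k, temp = 1.5, 1.0
    best_k, best_c = k, cost(k)
    while temp > 1e-4:
        cand = abs(k + random.gauss(0, 0.1))      # perturb the kinetic rate
        dc = cost(cand) - cost(k)
        if dc < 0 or random.random() < math.exp(-dc / temp):
            k = cand                              # accept downhill, or uphill with probability
        if cost(k) < best_c:
            best_k, best_c = k, cost(k)
        temp *= 0.99                              # geometric cooling
    print(f"recovered rate k = {best_k:.3f} (true value 0.3)")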

  11. Interaction of ascending magma with pre-existing crustal structures: Insights from analogue modeling

    NASA Astrophysics Data System (ADS)

    Le Corvec, N.; Menand, T.; Rowland, J. V.

    2010-12-01

    Magma transport through dikes is a major component of the development of basaltic volcanic fields. Basaltic volcanic fields occur in many different tectonic settings, from tensile (e.g., Camargo Volcanic Field, Mexico) to compressive (e.g., Abu Monogenetic Volcano Group, Japan). However, an important observation is that, independently of their tectonic setting, volcanic fields are characterized by numerous volcanic centers showing clustering and lineaments, each volcanic center typically resulting from a single main eruption. Analyses from the Auckland Volcanic Field reveal that, for each eruption, magma was transported from its source and reached the surface at different places within the same field, which raises the important question of the relative importance of 1) the self-propagation of magma through pristine rock, as opposed to 2) the control exerted by pre-existing structures. These two mechanisms have different implications for the alignment of volcanic centers in a field, as these may reflect either 1) the state of crustal stress dikes would have experienced (with a tendency to propagate perpendicular to the least compressive stress) or 2) the interaction of propagating dikes with pre-existing crustal faults. In the latter case, lineaments might not be related to the syn-emplacement state of stress of the crust. To address this issue, we have carried out a series of analogue experiments in order to constrain the interaction of a propagating magma-filled dike with superficial pre-existing structures (e.g., fracture, fault, joint system). The experiments involved the injection of air (a buoyant magma analogue) into elastic gelatine solids (crustal rock analogues). Cracks were cut into the upper part of the gelatine solids prior to the injection of air to simulate the presence of pre-existing fractures. The volume of the propagating dikes, their distance from pre-existing fractures and the ambient stress field were systematically varied to assess their influence.

  12. Quantitative comparisons of satellite observations and cloud models

    NASA Astrophysics Data System (ADS)

    Wang, Fang

    Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies (< 37 GHz) can detect the emission signals from the raining clouds over radiometrically cold ocean surfaces while higher frequencies (≥ 37 GHz) are more sensitive to the scattering of the precipitating-sized ice particles in the convective storms over high-emissivity land, which lend them particular capabilities for different applications. Both are explored with a different scenario for each case: a comparison of two rainfall retrievals over ocean and a comparison of a cloud model simulation to satellite observations over land. Both the Goddard Profiling algorithm (GPROF) and the European Centre for Medium-Range Weather Forecasts (ECMWF) one-dimensional + four-dimensional variational analysis (1D+4D-Var) rainfall retrievals are inversion algorithms based on Bayes' theorem. Differences stem primarily from the a-priori information. GPROF uses an observationally generated a-priori database while ECMWF 1D-Var uses the model forecast First Guess (FG) fields. The relative similarity in the two approaches means that comparisons can shed light on the differences that are produced by the a-priori information. Case studies have found that differences can be classified into four categories based upon the agreement in the brightness temperatures (Tbs) and in the microphysical properties of Cloud Water Path (CWP) and Rain Water Path (RWP) space. We found a category of special interest in which both retrievals converge to similar Tb through minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently and takes up 46

  13. Software applications toward quantitative metabolic flux analysis and modeling.

    PubMed

    Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan

    2014-01-01

    Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation; further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
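
    The central FBA computation that these tools automate is a linear program, which a toy network makes concrete. The three-metabolite network below is invented for illustration: maximise the biomass flux subject to the steady-state constraint S v = 0 and flux bounds.

    import numpy as np
    from scipy.optimize import linprog

    # rows: metabolites A, B, C; columns: uptake (-> A), A -> B, A -> C, biomass (B + C ->)
    S = np.array([[ 1, -1, -1,  0],
                  [ 0,  1,  0, -1],
                  [ 0,  0,  1, -1]])
    bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake capped at 10
    c = [0, 0, 0, -1]                  # linprog minimises, so negate biomass

    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    print("optimal fluxes:", np.round(res.x, 3), "biomass:", -res.fun)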

  14. A quantitative assessment of torque-transducer models for magnetoreception

    PubMed Central

    Winklhofer, Michael; Kirschvink, Joseph L.

    2010-01-01

    Although ferrimagnetic material appears suitable as a basis of magnetic field perception in animals, it is not known by which mechanism magnetic particles may transduce the magnetic field into a nerve signal. Provided that magnetic particles have remanence or anisotropic magnetic susceptibility, an external magnetic field will exert a torque and may physically twist them. Several models of such biological magnetic-torque transducers on the basis of magnetite have been proposed in the literature. We analyse from first principles the conditions under which they are viable. Models based on biogenic single-domain magnetite prove both effective and efficient, irrespective of whether the magnetic structure is coupled to mechanosensitive ion channels or to an indirect transduction pathway that exploits the stray field produced by the magnetic structure at different field orientations. On the other hand, torque-detector models that are based on magnetic multi-domain particles in the vestibular organs turn out to be ineffective. Also, we provide a generic classification scheme of torque transducers in terms of axial or polar output, within which we discuss the results from behavioural experiments conducted under altered field conditions or with pulsed fields. We find that the common assertion that a magnetoreceptor based on single-domain magnetite could not form the basis for an inclination compass does not always hold. PMID:20086054
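
    The viability criterion for a torque transducer can be illustrated with a back-of-envelope comparison of magnetic to thermal energy; the geometry below (a chain of twenty 50-nm single-domain crystals) is an assumed example, not a configuration from the paper.

    MS = 4.8e5             # magnetite saturation magnetisation, A/m
    KB, T = 1.38e-23, 300  # Boltzmann constant (J/K), temperature (K)
    B = 50e-6              # geomagnetic field strength, T

    volume = 20 * (50e-9) ** 3     # chain of 20 single-domain cubes (assumed)
    m = MS * volume                # magnetic moment, A m^2
    print(f"m*B / kT = {m * B / (KB * T):.1f}")   # >> 1: the field can twist the chain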

  15. Afference copy as a quantitative neurophysiological model for consciousness.

    PubMed

    Cornelis, Hugo; Coop, Allan D

    2014-06-01

    Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider there is nothing particularly complicated about consciousness when viewed as a necessary process of the vertebrate nervous system. Here, we propose a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications.

  16. Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    PubMed Central

    Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636

  17. Quantitative proteomics by metabolic labeling of model organisms.

    PubMed

    Gouw, Joost W; Krijgsveld, Jeroen; Heck, Albert J R

    2010-01-01

    In the biological sciences, model organisms have been used for many decades and have enabled the gathering of a large proportion of our present day knowledge of basic biological processes and their derailments in disease. Although in many of these studies using model organisms, the focus has primarily been on genetics and genomics approaches, it is important that methods become available to extend this to the relevant protein level. Mass spectrometry-based proteomics is increasingly becoming the standard to comprehensively analyze proteomes. An important transition has been made recently by moving from charting static proteomes to monitoring their dynamics by simultaneously quantifying multiple proteins obtained from differently treated samples. Especially the labeling with stable isotopes has proved an effective means to accurately determine differential expression levels of proteins. Among these, metabolic incorporation of stable isotopes in vivo in whole organisms is one of the favored strategies. In this perspective, we will focus on methodologies to stable isotope label a variety of model organisms in vivo, ranging from relatively simple organisms such as bacteria and yeast to Caenorhabditis elegans, Drosophila, and Arabidopsis up to mammals such as rats and mice. We also summarize how this has opened up ways to investigate biological processes at the protein level in health and disease, revealing conservation and variation across the evolutionary tree of life.

  18. Quantitative Risk Modeling of Fire on the International Space Station

    NASA Technical Reports Server (NTRS)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  19. Using Item-Type Performance Covariance to Improve the Skill Model of an Existing Tutor

    ERIC Educational Resources Information Center

    Pavlik, Philip I., Jr.; Cen, Hao; Wu, Lili; Koedinger, Kenneth R.

    2008-01-01

    Using data from an existing pre-algebra computer-based tutor, we analyzed the covariance of item-types with the goal of describing a more effective way to assign skill labels to item-types. Analyzing covariance is important because it allows us to place the skills in a related network in which we can identify the role each skill plays in learning…

  1. Existence Theorems for Vortices in the Aharony-Bergman-Jafferis-Maldacena Model

    NASA Astrophysics Data System (ADS)

    Han, Xiaosen; Yang, Yisong

    2015-01-01

    A series of sharp existence and uniqueness theorems are established for the multiple vortex solutions in the supersymmetric Chern-Simons-Higgs theory formalism of Aharony, Bergman, Jafferis, and Maldacena, for which the Higgs bosons and Dirac fermions lie in the bifundamental representation of the general gauge symmetry group U(N) × U(N). The governing equations are of the BPS type and were derived by Kim, Kim, Kwon, and Nakajima in the mass-deformed framework labeled by a continuous parameter.

  2. Concentric Coplanar Capacitive Sensor System with Quantitative Model

    NASA Technical Reports Server (NTRS)

    Bowler, Nicola (Inventor); Chen, Tianming (Inventor)

    2014-01-01

    A concentric coplanar capacitive sensor includes a charged central disc forming a first electrode, an outer annular ring coplanar with and outer to the charged central disc, the outer annular ring forming a second electrode, and a gap between the charged central disc and the outer annular ring. The first electrode and the second electrode may be attached to an insulative film. A method provides for determining transcapacitance between the first electrode and the second electrode and using the transcapacitance in a model that accounts for a dielectric test piece to determine inversely the properties of the dielectric test piece.

  3. Comparison of existing models to simulate anaerobic digestion of lipid-rich waste.

    PubMed

    Béline, F; Rodriguez-Mendez, R; Girault, R; Bihan, Y Le; Lessard, P

    2017-02-01

    Models for anaerobic digestion of lipid-rich waste taking inhibition into account were reviewed and, if necessary, adjusted to the ADM1 model framework in order to compare them. Experimental data from anaerobic digestion of slaughterhouse waste at an organic loading rate (OLR) ranging from 0.3 to 1.9 kg VS m^-3 d^-1 were used to compare and evaluate the models. Experimental data obtained at low OLRs were accurately modeled by all models, thereby validating the stoichiometric parameters used and the influent fractionation. However, at higher OLRs, although inhibition parameters were optimized to reduce differences between experimental and simulated data, no model was able to accurately simulate the accumulation of substrates and intermediates, mainly due to the wrong simulation of pH. A simulation using pH based on experimental data showed that acetogenesis and methanogenesis were the most sensitive steps to long-chain fatty acid (LCFA) inhibition and enabled identification of the inhibition parameters of both steps.

  4. Existence and time-discretization for the finite-strain Souza-Auricchio constitutive model for shape-memory alloys

    NASA Astrophysics Data System (ADS)

    Frigeri, Sergio; Stefanelli, Ulisse

    2012-01-01

    We prove the global existence of solutions for a shape-memory alloys constitutive model at finite strains. The model was presented in Evangelista et al. (Int J Numer Methods Eng 81(6):761-785, 2010) and corresponds to a suitable finite-strain version of the celebrated Souza-Auricchio model for SMAs (Auricchio and Petrini in Int J Numer Methods Eng 55:1255-1284, 2002; Souza et al. in J Mech A Solids 17:789-806, 1998). We reformulate the model in a purely variational fashion as a rate-independent process. Existence of suitably weak (energetic) solutions to the model is obtained by passing to the limit within a constructive time-discretization procedure.

  5. A quantitative and dynamic model for plant stem cell regulation.

    PubMed

    Geier, Florian; Lohmann, Jan U; Gerstung, Moritz; Maier, Annette T; Timmer, Jens; Fleck, Christian

    2008-01-01

    Plants maintain pools of totipotent stem cells throughout their entire life. These stem cells are embedded within specialized tissues called meristems, which form the growing points of the organism. The shoot apical meristem of the reference plant Arabidopsis thaliana is subdivided into several distinct domains, which execute diverse biological functions, such as tissue organization, cell proliferation and differentiation. The number of cells required for growth and organ formation changes over the course of a plant's life, while the structure of the meristem remains remarkably constant. Thus, regulatory systems must be in place that allow for an adaptation of cell proliferation within the shoot apical meristem, while maintaining the organization at the tissue level. To advance our understanding of this dynamic tissue behavior, we measured domain sizes as well as cell division rates of the shoot apical meristem under various environmental conditions, which cause adaptations in meristem size. Based on our results we developed a mathematical model to explain the observed changes by a cell pool size dependent regulation of cell proliferation and differentiation, which is able to correctly predict CLV3 and WUS over-expression phenotypes. While the model shows stem cell homeostasis under constant growth conditions, it predicts a variation in stem cell number under changing conditions. Consistent with our experimental data, this behavior is correlated with variations in cell proliferation. Therefore, we investigate different signaling mechanisms that could stabilize stem cell number despite variations in cell proliferation. Our results shed light on the dynamic constraints of stem cell pool maintenance in the shoot apical meristem of Arabidopsis in different environmental conditions and developmental states.

  6. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  7. Evaluation of a quantitative phosphorus transport model for potential improvement of southern phosphorus indices

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...

  8. Had the Planet Mars Not Existed: Kepler's Equant Model and Its Physical Consequences

    ERIC Educational Resources Information Center

    Bracco, C.; Provost, J.P.

    2009-01-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal…

  9. Discrete symmetry enhancement in non-Abelian models and the existence of asymptotic freedom

    NASA Astrophysics Data System (ADS)

    Patrascioiu, Adrian; Seiler, Erhard

    2001-09-01

    We study the universality between a discrete spin model with icosahedral symmetry and the O(3) model in two dimensions. For this purpose we study numerically the renormalized two-point functions of the spin field and the four-point coupling constant. We find that those quantities seem to have the same continuum limits in the two models. This has far-reaching consequences, because the icosahedron model is not asymptotically free in the sense that the coupling constant proposed by Lüscher, Weisz, and Wolff [Nucl. Phys. B359, 221 (1991)] does not approach zero in the short distance limit. By universality this then also applies to the O(3) model, contrary to the predictions of perturbation theory.

  10. A quantitative confidence signal detection model: 1. Fitting psychometric functions

    PubMed Central

    Yi, Yongwoo

    2016-01-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777
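
    The conventional baseline against which the confidence-based method is compared is a maximum-likelihood fit of a psychometric function to binary forced-choice data, sketched below on synthetic trials. The cumulative-Gaussian form and all parameter values are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    stim = np.repeat(np.linspace(-3, 3, 7), 15)        # ~100 forced-choice trials
    true_mu, true_sigma = 0.5, 1.2
    resp = rng.random(stim.size) < norm.cdf((stim - true_mu) / true_sigma)

    def nll(params):
        # negative log-likelihood of a cumulative-Gaussian psychometric function
        mu, log_sigma = params
        p = norm.cdf((stim - mu) / np.exp(log_sigma)).clip(1e-9, 1 - 1e-9)
        return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

    fit = minimize(nll, x0=[0.0, 0.0])
    print("mu, sigma =", fit.x[0].round(2), np.exp(fit.x[1]).round(2))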

  11. Canalization, genetic assimilation and preadaptation. A quantitative genetic model.

    PubMed Central

    Eshel, I; Matessi, C

    1998-01-01

    We propose a mathematical model to analyze the evolution of canalization for a trait under stabilizing selection, where each individual in the population is randomly exposed to different environmental conditions, independently of its genotype. Without canalization, our trait (primary phenotype) is affected by both genetic variation and environmental perturbations (morphogenic environment). Selection of the trait depends on individually varying environmental conditions (selecting environment). Assuming no plasticity initially, morphogenic effects are not correlated with the direction of selection in individual environments. Under quite plausible assumptions we show that natural selection favors a system of canalization that tends to repress deviations from the phenotype that is optimal in the most common selecting environment. However, many experimental results, dating back to Waddington and others, indicate that natural canalization systems may fail under extreme environments. While this can be explained as an impossibility of the system to cope with extreme morphogenic pressure, we show that a canalization system that tends to be inactivated in extreme environments is even more advantageous than rigid canalization. Moreover, once this adaptive canalization is established, the resulting evolution of primary phenotype enables substantial preadaptation to permanent environmental changes resembling extreme niches of the previous environment. PMID:9691063

  12. A quantitative model of the biogeochemical transport of iodine

    NASA Astrophysics Data System (ADS)

    Weng, H.; Ji, Z.; Weng, J.

    2010-12-01

    Iodine deficiency disorders (IDD) are among the world’s most prevalent public health problems, yet they are preventable by dietary iodine supplements. To better understand the biogeochemical behavior of iodine and to explore safer and more efficient ways of iodine supplementation as alternatives to iodized salt, we studied the behavior of iodine as it is absorbed, accumulated and released by plants. Using Chinese cabbage as a model system and the 125I tracing technique, we established that plants take up exogenous iodine from soil, most of which is transported to the stem and leaf tissue. The level of absorption of iodine by plants depends on the iodine concentration in the soil, as well as on the soil type, as different soils have different iodine-adsorption capacities. The leaching experiment showed that the iodine content remaining in the soil after leaching is determined by the iodine-adsorption ability of the soil and the pH of the leaching solution, but not by the volume of the leaching solution. Iodine in soil and plants can also be released to the air via vaporization in a concentration-dependent manner. This study provides a scientific basis for developing new methods to prevent IDD through iodized vegetable production.

  13. High-response piezoelectricity modeled quantitatively near a phase boundary

    NASA Astrophysics Data System (ADS)

    Newns, Dennis M.; Kuroda, Marcelo A.; Cipcigan, Flaviu S.; Crain, Jason; Martyna, Glenn J.

    2017-01-01

    Interconversion of mechanical and electrical energy via the piezoelectric effect is fundamental to a wide range of technologies. The discovery in the 1990s of giant piezoelectric responses in certain materials has therefore opened new application spaces, but the origin of these properties remains a challenge to our understanding. A key role is played by the presence of a structural instability in these materials at compositions near the "morphotropic phase boundary" (MPB) where the crystal structure changes abruptly and the electromechanical responses are maximal. Here we formulate a simple, unified theoretical description which accounts for extreme piezoelectric response, its observation at compositions near the MPB, accompanied by ultrahigh dielectric constant and mechanical compliances with rather large anisotropies. The resulting model, based upon a Landau free energy expression, is capable of treating the important domain engineered materials and is found to be predictive while maintaining simplicity. It therefore offers a general and powerful means of accounting for the full set of signature characteristics in these functional materials including volume conserving sum rules and strong substrate clamping effects.
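
    A Landau free energy of the generic form below conveys the structure of such a description; the expression is a textbook form given for illustration, with polarization P and applied field E, and is not the paper's specific functional.

    F(P) = \alpha P^2 + \beta P^4 + \gamma P^6 - E\,P, \qquad \chi^{-1} = \frac{\partial^2 F}{\partial P^2},

    so the dielectric susceptibility χ follows from the curvature of F at its minimum; softening of the low-order coefficients near the morphotropic phase boundary flattens that minimum and thereby enhances the dielectric and electromechanical responses together.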

  14. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose: To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods: Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results: Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34
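
    The CBS idea reduces to combining several voxel-wise parameters into one score and comparing its AUC against the best single parameter. The sketch below uses synthetic data and a logistic-regression combiner purely for illustration; the study itself used registered histopathology labels and its own model forms.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 2000
    cancer = rng.random(n) < 0.2
    adc    = rng.normal(1.6 - 0.5 * cancer, 0.3)     # lower ADC in cancer (assumed)
    t2     = rng.normal(120 - 30 * cancer, 25)
    ktrans = rng.normal(0.1 + 0.15 * cancer, 0.08)

    X = np.column_stack([adc, t2, ktrans])
    cbs = LogisticRegression().fit(X, cancer).predict_proba(X)[:, 1]

    print("ADC alone AUC:", round(roc_auc_score(cancer, -adc), 3))
    print("CBS (3-parameter) AUC:", round(roc_auc_score(cancer, cbs), 3))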

  15. Quantitative Modeling of Entangled Polymer Rheology: Experiments, Tube Models and Slip-Link Simulations

    NASA Astrophysics Data System (ADS)

    Desai, Priyanka Subhash

    Rheological properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on tube theory, coarse-grained slip-link simulations, and advanced polymer synthesis and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube-based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We then used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension, as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set, prescribing parameter values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high-frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high-frequency behavior. Therefore, all three parameters of the tube model were extracted from this high-frequency transition region alone. Third, we compared predictions of two versions of the tube model, the Hierarchical model and the BoB model, against linear viscoelastic data of blends of 1,4-PBd

  16. Comparison of approaches for incorporating new information into existing risk prediction models.

    PubMed

    Grill, Sonja; Ankerst, Donna P; Gail, Mitchell H; Chatterjee, Nilanjan; Pfeiffer, Ruth M

    2017-03-30

    We compare the calibration and variability of risk prediction models that were estimated using various approaches for combining information on new predictors, termed 'markers', with parameter information available for other variables from an earlier model, which was estimated from a large data source. We assess the performance of risk prediction models updated based on likelihood ratio (LR) approaches that incorporate dependence between new and old risk factors as well as approaches that assume independence ('naive Bayes' methods). We study the impact of estimating the LR by (i) fitting a single model to cases and non-cases when the distribution of the new markers is in the exponential family or (ii) fitting separate models to cases and non-cases. We also evaluate a new constrained maximum likelihood method. We study updating the risk prediction model when the new data arise from a cohort and extend available methods to accommodate updating when the new data source is a case-control study. To create realistic correlations between predictors, we also based simulations on real data on response to antiviral therapy for hepatitis C. From these studies, we recommend the LR method fit using a single model or constrained maximum likelihood. Copyright © 2016 John Wiley & Sons, Ltd.
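
    In its simplest (naive Bayes) form, the updating rule reduces to a one-line computation: posterior odds = prior odds × LR(marker), where the prior risk comes from the existing model and the LR compares the marker's density in cases versus non-cases. The Gaussian marker model and all numbers below are invented for illustration.

    import math

    def updated_risk(old_risk, marker, mu_case=1.0, mu_ctrl=0.0, sd=1.0):
        # likelihood ratio for a Gaussian marker with equal variances (assumed)
        lr = math.exp(-(marker - mu_case) ** 2 / (2 * sd**2)) / \
             math.exp(-(marker - mu_ctrl) ** 2 / (2 * sd**2))
        odds = old_risk / (1 - old_risk) * lr
        return odds / (1 + odds)

    for x in (-1.0, 0.0, 1.0, 2.0):
        print(f"marker={x:+.1f}: 10% prior -> {updated_risk(0.10, x):.1%}")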

  17. Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models

    NASA Astrophysics Data System (ADS)

    de Diesbach, Pablo Brice

    This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and criticize the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from more profound, business-related analyses that elaborate on the mechanisms of rhetoric in marketing and communication, and on the possible role of E-SQ in man-agent interaction. We finally suggest a refined model for the impacts of these agents on web site users, and comment on the limits of the improved model.

  18. Epistasis and Quantitative Traits: Using Model Organisms to Study Gene-Gene Interactions

    PubMed Central

    Mackay, Trudy F. C.

    2014-01-01

    The role of epistasis in the genetic architecture of quantitative traits is controversial, despite the biological plausibility that non-linear molecular interactions underpin the genotype-phenotype map. This controversy arises because most genetic variation for quantitative traits is additive. However, additive variance is consistent with pervasive epistatic gene action. Here, I discuss experimental designs to detect the contribution of epistasis to quantitative trait phenotypes in model organisms. These studies indicate that epistatic gene action is common, and that additivity can be an emergent property of underlying genetic interaction networks. Epistasis causes hidden quantitative genetic variation in natural populations and could be responsible for the small additive effects, missing heritability and lack of replication typically observed for human complex traits. PMID:24296533

  19. The Lightning Rod Model: a Genesis for Quantitative Near-Field Spectroscopy

    NASA Astrophysics Data System (ADS)

    McLeod, Alexander; Andreev, Gregory; Dominguez, Gerardo; Thiemens, Mark; Fogler, Michael; Basov, D. N.

    2013-03-01

    Near-field infrared spectroscopy has the proven ability to resolve optical contrasts in materials at deeply sub-wavelength scales across a broad range of infrared frequencies. In principle, the technique enables sub-diffractional optical identification of chemical compositions within nanostructured and naturally heterogeneous samples. However current models of probe-sample optical interaction, while qualitatively descriptive, cannot quantitatively explain infrared near-field spectra, especially for strongly resonant sample materials. We present a new first-principles model of near-field interaction, and demonstrate its superb agreement with infrared near-field spectra measured for thin films of silicon dioxide and the strongly phonon-resonant material silicon carbide. Using this model we reveal the role of probe geometry and surface mode dispersion in shaping the measured near-field spectrum, establishing its quantitative relationship with the dielectric properties of the sample. This treatment offers a route to the quantitative determination of optical constants at the nano-scale.

  20. Evaluating Alternative Methodologies for Capturing As-Built Building Information Models (BIM) For Existing Facilities

    DTIC Science & Technology

    2010-08-01

    [Fragmentary text extracted from the report: NASA TLX (2003) workload scale definitions (e.g., mental demand, low/high: how much mental and perceptual activity was required); noted limitations that a full facility model/photos require scanning each room and that COBIE data capture remains to be accomplished; hardware list including the Panasonic Toughbook U1 fully rugged ultra-mobile PC at $2,499 each (http://catalog2.panasonic.com).]

  1. Can existing climate models be used to study anthropogenic changes in tropical cyclone climate

    SciTech Connect

    Broccoli, A.J.; Manabe, S.

    1990-10-01

    The utility of current generation climate models for studying the influence of greenhouse warming on the tropical storm climatology is examined. A method developed to identify tropical cyclones is applied to a series of model integrations. The global distribution of tropical storms is simulated by these models in a generally realistic manner. While the model resolution is insufficient to reproduce the fine structure of tropical cyclones, the simulated storms become more realistic as resolution is increased. To obtain a preliminary estimate of the response of the tropical cyclone climatology, CO2 was doubled using models with varying cloud treatments and different horizontal resolutions. In the experiment with prescribed cloudiness, the number of storm-days, a combined measure of the number and duration of tropical storms, undergoes a statistically significant reduction; a reduction in the number of storm-days is also indicated in the experiment with cloud feedback. In both cases the response is independent of horizontal resolution. While the inconclusive nature of these experimental results highlights the uncertainties that remain in examining the details of greenhouse-gas induced climate change, the ability of the models to qualitatively simulate the tropical storm climatology suggests that they are appropriate tools for this problem.

  2. 3D Numerical Modeling of the Propagation of Hydraulic Fracture at Its Intersection with Natural (Pre-existing) Fracture

    NASA Astrophysics Data System (ADS)

    Dehghan, Ali Naghi; Goshtasbi, Kamran; Ahangari, Kaveh; Jin, Yan; Bahmani, Aram

    2017-02-01

    A variety of 3D numerical models were developed based on hydraulic fracture experiments to simulate the propagation of a hydraulic fracture at its intersection with a natural (pre-existing) fracture. Since the interaction between hydraulic and pre-existing fractures is a key condition that causes complex fracture patterns, the extended finite element method was employed in ABAQUS software to simulate the problem. The propagation of hydraulic fracture in a fractured medium was modeled at two horizontal differential stresses (Δσ) of 5 and 10 MPa, considering different strike and dip angles of the pre-existing fracture. The rate of energy release was calculated in the directions of the hydraulic and pre-existing fractures (G_frac/G_rock) at their intersection point to determine the fracture behavior. Opening and crossing were the two dominant fracture behaviors during the hydraulic and pre-existing fracture interaction at low and high differential stress conditions, respectively. The results of the numerical studies were compared with those of experimental models, showing a good agreement between the two and validating the accuracy of the models. Besides the horizontal differential stress and the strike and dip angles of the natural (pre-existing) fracture, the key finding of this research was the significant effect of the energy release rate on the propagation behavior of the hydraulic fracture. This effect was more prominent under the influence of strike and dip angles, as well as differential stress. The obtained results can be used to predict and interpret the generation of complex hydraulic fracture patterns in field conditions.

  3. Utilization of data estimation via existing models, within a tiered data quality system, for populating species sensitivity distributions

    EPA Science Inventory

    The acquisition of toxicity test data of sufficient quality from the open literature to fulfill taxonomic diversity requirements can be a limiting factor in the creation of new 304(a) Aquatic Life Criteria. The use of existing models (WebICE and ACE) that estimate acute and chronic eff...

  4. Structural and Stratigraphic Evolution of the Iberia and Newfoundland Rifted Margins: A Quantitative Modeling Approach

    NASA Astrophysics Data System (ADS)

    Mohn, G.; Karner, G. D.; Manatschal, G.; Johnson, C. A.

    2014-12-01

    Rifted margins develop generally through polyphased extensional events leading eventually to break-up. We investigate the spatial and temporal evolution of the Iberia-Newfoundland rifted margin from its Permian post-orogenic stage to early Cretaceous break-up. We have applied Quantitative Basin Analysis to integrate seismic stratigraphic interpretations and drill hole data of representative sections across the Iberia-Newfoundland margins with kinematic models for the thinning of the lithosphere and subsequent isostatic readjustment. Our goal is to predict the distribution of extension and thinning, environments of deposition, crustal structure and subsidence history as functions of space and time. The first sediments deposited on the Iberian continental crust were in response to Permian lithospheric thinning, associated with magmatic underplating and subsequent thermal re-equilibration of the lithosphere. During late Triassic-early Jurassic rifting, a broadly distributed depth-independent lithospheric extension occurred, followed by late Jurassic rifting that increasingly focused with time and became depth-dependent during the early Cretaceous. However, there exists a temporality in the along-strike deformation of the Iberia-Newfoundland margin: significant Valanginian-Hauterivian deformation characterizes the northern Galicia Bank-Flemish Cap while the southern Iberian-Newfoundland region is characterized by Tithonian-early Berriasian extension. Deformation localized with time on both margins leading to late Aptian break-up. To match the distribution and magnitude of subsidence across the profiles requires significant thinning of middle/lower crustal level and subcontinental lithospheric mantle, leading to the formation of the hyper-extended domains. The late-stage deformation of both margins was characterized by a predominantly brittle deformation of the residual continental crust, leading to exhumation of subcontinental mantle and ultimately to seafloor

  5. Had the planet Mars not existed: Kepler's equant model and its physical consequences

    NASA Astrophysics Data System (ADS)

    Bracco, C.; Provost, J.-P.

    2009-09-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal acceleration with an r^-2 dependence on the distance to the Sun. If this dependence is assumed to be universal, Kepler's third law follows immediately. This elementary exercise in kinematics for undergraduates emphasizes the proximity of the equant model coming from ancient Greece with our present knowledge. It adds to its historical interest a didactical relevance concerning, in particular, the discussion of the Aristotelian or Newtonian conception of motion.
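
    The first-order claim is easy to check numerically. The sketch below assumes the commonly stated geometry (circle of radius a, Sun and equant offset by -ea and +ea from the center, uniform angular motion as seen from the equant, first-order solution u = t - e sin t) and verifies that both the specific angular momentum and |acceleration|·r^2 are constant up to O(e^2) terms:

    ```python
    import numpy as np

    # Equant model: planet on a circle of radius a centered at (0, 0);
    # Sun at (-e*a, 0), equant at (+e*a, 0); the angle seen from the
    # equant grows uniformly with time t. To first order: u = t - e*sin(t).
    a, e = 1.0, 0.05
    t = np.linspace(0.0, 2.0*np.pi, 400001)
    dt = t[1] - t[0]
    u = t - e*np.sin(t)

    x = a*np.cos(u) + e*a           # position relative to the Sun
    y = a*np.sin(u)
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)

    r = np.hypot(x, y)
    L = x*vy - y*vx                 # specific angular momentum (2nd law)
    acc_r2 = np.hypot(ax, ay)*r**2  # constant if acceleration goes as r^-2

    inner = slice(10, -10)          # drop np.gradient edge artifacts
    print("spread of L:       %.1e" % (np.ptp(L[inner]) / L[inner].mean()))
    print("spread of |a|*r^2: %.1e" % (np.ptp(acc_r2[inner]) / acc_r2[inner].mean()))
    ```

    Both relative spreads come out at roughly e^2, consistent with the first-order statement in the abstract.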

  6. The Existence and Stability Analysis of the Equilibria in Dengue Disease Infection Model

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Supriatna, A. K.; Soewono, E.

    2015-06-01

    In this paper we formulate an SIR (Susceptible - Infective - Recovered) model of dengue fever transmission with constant recruitment. We found a threshold parameter K0, known as the Basic Reproduction Number (BRN). This model has two equilibria: a disease-free equilibrium and an endemic equilibrium. By constructing a suitable Lyapunov function, we show that the disease-free equilibrium is globally asymptotically stable whenever the BRN is less than one, and that when it is greater than one the endemic equilibrium is globally asymptotically stable. Numerical results show the dynamics of each compartment together with the effect of multiple bio-agent interventions as a control on dengue transmission.
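
    A minimal working version of such a model (a single population with constant recruitment, a simplification of the paper's dengue system, with illustrative parameter values) can be integrated directly:

    ```python
    from scipy.integrate import solve_ivp

    # Minimal SIR with constant recruitment Lam and natural mortality mu
    # (a simplification of the paper's dengue model; values illustrative).
    Lam, mu, beta, gamma = 100.0, 0.01, 2e-5, 0.1

    def sir(t, y):
        S, I, R = y
        return [Lam - beta*S*I - mu*S,
                beta*S*I - (gamma + mu)*I,
                gamma*I - mu*R]

    R0 = beta*(Lam/mu)/(gamma + mu)        # basic reproduction number
    sol = solve_ivp(sir, (0.0, 2000.0), [Lam/mu - 1.0, 1.0, 0.0], rtol=1e-8)
    print(f"R0 = {R0:.2f}, I(2000) = {sol.y[1, -1]:.0f}")  # endemic if R0 > 1
    ```

    With these numbers the reproduction number exceeds one and the trajectory settles at the endemic equilibrium; lowering beta so that R0 < 1 drives I to zero, matching the stability results described above.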

  7. Existing Whole-House Solutions Case Study: Community-Scale Energy Modeling - Southeastern United States

    SciTech Connect

    2014-12-01

    Community-scale energy modeling and testing are useful for determining energy conservation measures that will effectively reduce energy use. To that end, IBACOS analyzed pre-retrofit daily utility data to sort homes by energy consumption, allowing for better targeting of homes for physical audits. Following ASHRAE Guideline 14 normalization procedures, electricity consumption of 1,166 all-electric, production-built homes was modeled. The homes were in two communities: one built in the 1970s and the other in the mid-2000s.

  8. Benthic-Pelagic Coupling in Biogeochemical and Climate Models: Existing Approaches, Recent developments and Roadblocks

    NASA Astrophysics Data System (ADS)

    Arndt, Sandra

    2016-04-01

    Marine sediments are key components of the Earth system. They host the largest carbon reservoir on Earth, provide the only long-term sink for atmospheric CO2, recycle nutrients, and represent the most important climate archive. Biogeochemical processes in marine sediments are thus essential for our understanding of global biogeochemical cycles and climate. They are, first and foremost, donor-controlled and thus driven by the rain of particulate material from the euphotic zone and influenced by the overlying bottom water. Geochemical species may undergo several recycling loops (e.g. authigenic mineral precipitation/dissolution) before they are either buried or diffuse back to the water column. The tightly coupled and complex pelagic-benthic process interplay thus delays recycling fluxes, significantly modifies the depositional signal, and controls the long-term removal of carbon from the ocean-atmosphere system. Despite the importance of this mutual interaction, coupled regional/global biogeochemical models and (paleo)climate models, which are designed to assess and quantify the transformations and fluxes of carbon and nutrients and evaluate their response to past and future perturbations of the climate system, either completely neglect marine sediments or incorporate a highly simplified representation of benthic processes. At the other end of the spectrum, coupled, multi-component, state-of-the-art early diagenetic models have been successfully developed and applied over the past decades to reproduce observations and quantify sediment-water exchange fluxes, but cannot easily be coupled to pelagic models. The primary constraint here is the high computational cost of simulating all of the essential redox and equilibrium reactions within marine sediments that control carbon burial and benthic recycling fluxes: a barrier that is easily exacerbated if a variety of benthic environments are to be spatially resolved. This presentation provides an integrative overview of

  9. What are the unique attributes of potassium that challenge existing nutrient uptake models?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil potassium (K) availability and acquisition by plant root systems are controlled by complex, interacting processes that make it difficult to assess their individual impacts on crop growth. Mechanistic, mathematical models provide an important tool to enhance understanding of these processes, and...

  10. National Interlending Systems: A Comparative Study of Existing Systems and Possible Models. Revised.

    ERIC Educational Resources Information Center

    Line, Maurice B.; And Others

    Based on research completed in 1977 and comments on a 1979 preliminary version of this report, this work evaluates current interlending practices among participants in the United Nations Educational, Scientific, and Cultural Organization (UNESCO), and proposes various models of interlibrary lending provision. The paper outlines the elements…

  11. Racial Differences in the Performance of Existing Risk Prediction Models for Incident Type 2 Diabetes: The CARDIA Study

    PubMed Central

    Wellenius, Gregory A.; Carnethon, Mercedes R.; Loucks, Eric B.; Carson, April P.; Luo, Xi; Kiefe, Catarina I.; Gjelsvik, Annie; Gunderson, Erica P.; Eaton, Charles B.; Wu, Wen-Chih

    2016-01-01

    OBJECTIVE In 2010, the American Diabetes Association (ADA) added hemoglobin A1c (A1C) to the guidelines for diagnosing type 2 diabetes. However, existing models for predicting diabetes risk were developed prior to the widespread adoption of A1C. Thus, it remains unknown how well existing diabetes risk prediction models predict incident diabetes defined according to the ADA 2010 guidelines. Accordingly, we examined the performance of an existing diabetes prediction model applied to a cohort of African American (AA) and white adults from the Coronary Artery Risk Development Study in Young Adults (CARDIA). RESEARCH DESIGN AND METHODS We evaluated the performance of the Atherosclerosis Risk in Communities (ARIC) diabetes risk prediction model among 2,456 participants in CARDIA free of diabetes at the 2005–2006 exam and followed for 5 years. We evaluated model discrimination, calibration, and integrated discrimination improvement with incident diabetes defined by ADA 2010 guidelines before and after adding baseline A1C to the prediction model. RESULTS In the overall cohort, re-estimating the ARIC model in the CARDIA cohort resulted in good discrimination for the prediction of 5-year diabetes risk (area under the curve [AUC] 0.841). Adding baseline A1C as a predictor improved discrimination (AUC 0.841 vs. 0.863, P = 0.03). In race-stratified analyses, model discrimination was significantly higher in whites than AA (AUC AA 0.816 vs. whites 0.902; P = 0.008). CONCLUSIONS Addition of A1C to the ARIC diabetes risk prediction model improved performance overall and in racial subgroups. However, for all models examined, discrimination was better in whites than AA. Additional studies are needed to further improve diabetes risk prediction among AA. PMID:26628420

  12. Modelling of the shielding capabilities of the existing solid radioactive waste storages at Ignalina NPP.

    PubMed

    Smaizys, Arturas; Poskas, Povilas; Ragaisis, Valdas

    2005-01-01

    There is only one nuclear power plant in Lithuania--Ignalina NPP (INPP). The INPP operates two similar units with design electrical power of 1500 MW. The units were commissioned in 1983 and 1987 respectively. From the beginning of the INPP operation all generated solid radioactive waste was collected and stored at the Soviet type solid radwaste facility located at INPP site. The INPP solid radwaste storage facility consists of four buildings, namely building No. 155, No. 155/1, No. 157 and No. 157/1. The buildings of the INPP solid radwaste storage facility are reinforced concrete structures above ground. State Nuclear Safety Inspectorate (VATESI) has specified that particular safety analysis must be performed for existing radioactive waste storage facilities of the INPP. As part of the safety analysis, shielding capabilities of the walls and roofs of these buildings were analysed. This paper presents radiation shielding analysis of the buildings No. 157 and No. 157/1 that are still in operation. The buildings No. 155 and No. 155/1 are already filled up with the waste and no additional waste loading is expected.

  13. The Power of a Good Idea: Quantitative Modeling of the Spread of Ideas from Epidemiological Models

    SciTech Connect

    Bettencourt, L. M. A.; Cintron-Arias, A.; Kaiser, D. I.; Castillo-Chavez, C.

    2005-05-05

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.
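
    The simplest epidemic-style description of idea adoption is a logistic curve, and fitting it to cumulative adopter counts yields a contact/adoption rate analogous to the epidemiological parameters mentioned above. The sketch below uses hypothetical counts, not the Feynman-diagram data from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """Logistic adoption curve: K adopters at saturation, growth rate r."""
        return K / (1.0 + np.exp(-r*(t - t0)))

    # Hypothetical cumulative adopter counts (illustrative only)
    years = np.array([1949, 1950, 1951, 1952, 1953, 1954], dtype=float)
    adopters = np.array([2.0, 9.0, 30.0, 70.0, 95.0, 104.0])

    (K, r, t0), _ = curve_fit(logistic, years, adopters, p0=(100.0, 1.0, 1951.0))
    print(f"saturation K = {K:.0f}, adoption rate r = {r:.2f}/yr, midpoint {t0:.1f}")
    ```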

  14. Quantitative petri net model of gene regulated metabolic networks in the cell.

    PubMed

    Chen, Ming; Hofestädt, Ralf

    2011-01-01

    A method exploiting hybrid Petri nets (HPN) for quantitatively modeling and simulating gene-regulated metabolic networks is demonstrated. A global kinetic modeling strategy and a Petri net modeling algorithm are applied to describe bioprocess functioning and perform model analysis. With the model, the interrelations between pathway analysis and metabolic control mechanisms are outlined. Diagrammatic results for the dynamics of metabolites are simulated and observed by implementing an HPN tool, Visual Object Net++. An explanation of the observed behavior of the urea cycle is proposed to indicate possibilities for metabolic engineering and medical care. Finally, the perspective of Petri nets for modeling and simulation of metabolic networks is discussed.
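
    The discrete core of a Petri net is small enough to sketch directly: places hold tokens, and a transition fires when its input places hold enough tokens. The toy network below is illustrative and is not the paper's urea-cycle model (a hybrid Petri net adds continuous places and rate functions on top of this):

    ```python
    import numpy as np

    # Minimal discrete Petri net: 3 places, 2 transitions, with pre/post
    # (consumption/production) matrices. Toy network for illustration only.
    places = ["A", "B", "C"]
    pre = np.array([[1, 0],      # t0 consumes one A
                    [1, 1],      # t0 and t1 each consume one B
                    [0, 0]])
    post = np.array([[0, 0],
                     [0, 0],
                     [1, 1]])    # both transitions produce one C
    marking = np.array([3, 4, 0])
    rng = np.random.default_rng(0)

    for _ in range(10):
        enabled = [j for j in range(pre.shape[1])
                   if np.all(marking >= pre[:, j])]
        if not enabled:          # dead marking: nothing can fire
            break
        j = rng.choice(enabled)  # fire one enabled transition at random
        marking = marking - pre[:, j] + post[:, j]

    print(dict(zip(places, marking.tolist())))
    ```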

  15. A quantitative model for the rate-limiting process of UGA alternative assignments to stop and selenocysteine codons

    PubMed Central

    Chuang, Kai-Neng; Yen, Hsueh-Chi S.

    2017-01-01

    Ambiguity in genetic codes exists in cases where certain stop codons are alternatively used to encode non-canonical amino acids. In selenoprotein transcripts, the UGA codon may either represent a translation termination signal or a selenocysteine (Sec) codon. Translating UGA to Sec requires selenium and specialized Sec incorporation machinery such as the interaction between the SECIS element and the SBP2 protein, but how these factors quantitatively affect alternative assignments of UGA has not been fully investigated. We developed a model simulating the UGA decoding process. Our model is based on the following assumptions: (1) charged Sec-specific tRNAs (Sec-tRNA^Sec) and release factors compete for a UGA site, (2) Sec-tRNA^Sec abundance is limited by the concentrations of selenium and Sec-specific tRNA (tRNA^Sec) precursors, and (3) all synthesis reactions follow first-order kinetics. We demonstrated that this model captures two prominent characteristics observed in experimental data. First, UGA-to-Sec decoding increases with elevated selenium availability but saturates under high selenium supply. Second, the efficiency of Sec incorporation is reduced with increasing selenoprotein synthesis. We measured the expression of four selenoprotein constructs and estimated their model parameters. Their inferred Sec incorporation efficiencies did not correlate well with their SECIS-SBP2 binding affinities, suggesting the existence of additional factors determining the hierarchy of selenoprotein synthesis under selenium deficiency. This model provides a framework to systematically study the interplay of factors affecting the dual definitions of a genetic codon. PMID:28178267
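
    Assumptions (1)-(3) condense into a few lines: a saturating, selenium-limited charged tRNA pool competing with release factors for the UGA site. The sketch below uses illustrative rate constants, not the paper's fitted parameters, but reproduces the first characteristic (saturation with selenium supply):

    ```python
    # Competition at a UGA codon under assumed first-order kinetics; rate
    # constants and pool sizes are illustrative, not the paper's fits.
    def sec_efficiency(selenium, trna_prec=1.0, rf=1.0,
                       k_sec=1.0, k_rel=0.5, K_se=0.3):
        # Charged Sec-tRNA pool, limited by selenium and tRNA precursors
        sec_trna = trna_prec * selenium / (K_se + selenium)
        # Fraction of ribosomes reading UGA as Sec rather than stop
        return k_sec*sec_trna / (k_sec*sec_trna + k_rel*rf)

    for se in (0.01, 0.1, 1.0, 10.0):
        print(f"selenium = {se:5.2f}  UGA->Sec fraction = {sec_efficiency(se):.2f}")
    ```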

  16. Existence of complex patterns in the Beddington-DeAngelis predator-prey model.

    PubMed

    Haque, Mainul

    2012-10-01

    The study of reaction-diffusion system constitutes some of the most fascinating developments of late twentieth century mathematics and biology. This article investigates complexity and chaos in the complex patterns dynamics of the original Beddington-DeAngelis predator-prey model which concerns the influence of intra species competition among predators. We investigate the emergence of complex patterns through reaction-diffusion equations in this system. We derive the conditions for the codimension-2 Turing-Hopf, Turing-Saddle-node, and Turing-Transcritical bifurcation, and the codimension-3 Turing-Takens-Bogdanov bifurcation. These bifurcations give rise to very complex patterns that have not been observed in previous predator-prey models. A large variety of different types of long-term behavior, including homogenous distributions and stationary spatial patterns are observed through extensive numerical simulations with experimentally-based parameter values. Finally, a discussion of the ecological implications of the analytical and numerical results concludes the paper.
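
    The reaction part of the system is the classical Beddington-DeAngelis model, in which the cP term in the functional response denominator encodes predator interference (intraspecific competition among predators). A minimal non-spatial integration, with illustrative parameters, looks like this:

    ```python
    from scipy.integrate import solve_ivp

    # Non-spatial Beddington-DeAngelis predator-prey equations (the reaction
    # part of the paper's reaction-diffusion system); values illustrative.
    r, K, a, b, c, eps, m = 1.0, 10.0, 1.0, 0.2, 0.5, 0.5, 0.3

    def bd(t, y):
        N, P = y
        f = a*N*P / (1.0 + b*N + c*P)   # c*P = predator interference term
        return [r*N*(1.0 - N/K) - f, eps*f - m*P]

    sol = solve_ivp(bd, (0.0, 200.0), [5.0, 2.0], rtol=1e-8)
    print("(N, P) at t = 200:", sol.y[:, -1])
    ```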

  17. Modeling Aseismic and Seismic Slip Induced by Fluid Injection on Pre-existing Faults Governed by Rate-and-state Friction

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Harrington, R. M.; Deng, K.; Larochelle, S.

    2015-12-01

    Pore fluid pressure evolution on pre-existing faults in the vicinity of fluid injection activity has been postulated as a key factor for inducing both moderate size earthquakes and aseismic slip. In this study, we develop a numerical model incorporating rate-and-state friction properties to investigate fault slip initiated by various perturbations, including fluid injection and transient dynamic stress changes. In the framework of rate-and-state friction, external stress perturbations and their spatiotemporal variation can be coupled to fault frictional strength evolution in a single computational procedure. Hence it provides a quantitative understanding of the source processes (i.e., slip rate, rupture area, triggering threshold) of a spectrum of slip modes under the influence of anthropogenic and natural perturbations. Preliminary results show both the peak and cumulative Coulomb stress change values can affect the transition from aseismic to seismic slip and the amount of slip. We plan to apply the physics-based slip model to induced earthquakes in western Canada sedimentary basins. In particular, we will focus on the Fox Creek sequences in north Alberta, where two earthquakes of ML4.4 (2015/01/23) and Mw4.6 (2015/06/13) were potentially induced by nearby hydraulic fracturing activity. The geometry of the seismogenic faults of the two events will be constrained by relocated seismicity as well as their focal mechanism solutions. Rate-and-state friction parameters and ambient stress conditions will be constrained by identifying dynamic triggering criteria using a matched-filter approach. A poroelastic model will be used to estimate the pore pressure history resolved onto the fault plane due to fluid injection. By comparing modeled earthquake source parameters to those estimated from seismic analysis, we aim to quantitatively discern the nucleation conditions of injection-induced versus dynamically triggered earthquakes, and aseismic versus seismic slip modes.
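
    To give a flavor of the coupling described above, the sketch below integrates a quasi-static spring-slider with aging-law rate-and-state friction while the effective normal stress is smoothly ramped down, a crude stand-in for a pore-pressure increase. This is not the authors' model; all parameter values are illustrative, and the frictional parameters are velocity-strengthening (a > b), so the response is an accelerated-creep transient rather than seismic slip:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Quasi-static spring-slider with rate-and-state (aging-law) friction.
    a, b, Dc, mu0, V0 = 0.012, 0.008, 1e-4, 0.6, 1e-9
    k, Vpl = 1e7, 1e-9               # spring stiffness (Pa/m), load rate (m/s)

    def sigma_eff(t):                # 50 MPa, ramped down by 10 MPa at t ~ 5e6 s
        return 50e6 - 10e6/(1.0 + np.exp(-(t - 5e6)/2e4))

    def dsigma_dt(t):
        s = 1.0/(1.0 + np.exp(-(t - 5e6)/2e4))
        return -10e6*s*(1.0 - s)/2e4

    def rhs(t, y):
        V, theta = y
        sig = sigma_eff(t)
        mu = mu0 + a*np.log(V/V0) + b*np.log(V0*theta/Dc)
        dtheta = 1.0 - V*theta/Dc    # aging law
        # shear balance: k*(Vpl - V) = d(mu*sigma)/dt
        dV = (V/(a*sig))*(k*(Vpl - V) - mu*dsigma_dt(t) - sig*b*dtheta/theta)
        return [dV, dtheta]

    sol = solve_ivp(rhs, (0.0, 1e7), [Vpl, Dc/Vpl],
                    rtol=1e-8, atol=1e-12, max_step=1e4)
    print(f"peak slip rate: {sol.y[0].max():.2e} m/s (aseismic creep transient)")
    ```

    Making the parameters velocity-weakening (a < b) with a compliant spring would instead drive the slider unstable, the seismic end of the spectrum of slip modes discussed in the abstract.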

  18. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2016-07-01

    coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.

  19. Cholera Modeling: Challenges to Quantitative Analysis and Predicting the Impact of Interventions

    PubMed Central

    Grad, Yonatan H.; Miller, Joel C.; Lipsitch, Marc

    2012-01-01

    Several mathematical models of epidemic cholera have recently been proposed in response to outbreaks in Zimbabwe and Haiti. These models aim to estimate the dynamics of cholera transmission and the impact of possible interventions, with a goal of providing guidance to policy-makers in deciding among alternative courses of action, including vaccination, provision of clean water, and antibiotics. Here we discuss concerns about model misspecification, parameter uncertainty, and spatial heterogeneity intrinsic to models for cholera. We argue for caution in interpreting quantitative predictions, particularly predictions of the effectiveness of interventions. We specify sensitivity analyses that would be necessary to improve confidence in model-based quantitative prediction, and suggest types of monitoring in future epidemic settings that would improve analysis and prediction. PMID:22659546

  20. Modeling Magnetite Reflectance Spectra Using Hapke Theory and Existing Optical Constants

    NASA Technical Reports Server (NTRS)

    Roush, T. L.; Blewett, D. T.; Cahill, J. T. S.

    2016-01-01

    Magnetite is an accessory mineral found in terrestrial environments, some meteorites, and the lunar surface. The reflectance of magnetite powders is relatively low [1], and this property makes it an analog for other dark Fe- or Ti-bearing components, particularly ilmenite on the lunar surface. The real and imaginary indices of refraction (optical constants) for magnetite are available in the literature [2-3], and online [4]. Here we use these values to calculate the reflectance of particulates and compare these model spectra to reflectance measurements of magnetite available online [5].
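
    For reference, the isotropic-scatterer Hapke bidirectional reflectance (without the opposition effect) that such calculations build on can be sketched in a few lines. In the full treatment the single-scattering albedo w would itself be derived from the published optical constants, a step skipped here:

    ```python
    import numpy as np

    # Isotropic-scatterer Hapke reflectance, no opposition effect:
    # r = (w/4pi) * mu0/(mu0 + mu) * [P + H(mu0)*H(mu) - 1], using the
    # standard rational approximation for the Chandrasekhar H-functions.
    def H(x, w):
        return (1.0 + 2.0*x) / (1.0 + 2.0*x*np.sqrt(1.0 - w))

    def hapke_r(w, inc_deg=30.0, emi_deg=0.0, P=1.0):
        mu0 = np.cos(np.radians(inc_deg))
        mu = np.cos(np.radians(emi_deg))
        return w/(4.0*np.pi) * mu0/(mu0 + mu) * (P + H(mu0, w)*H(mu, w) - 1.0)

    # Dark opaque minerals such as magnetite have low single-scattering albedo
    for w in (0.1, 0.3, 0.7):
        print(f"w = {w:.1f}  r = {hapke_r(w):.4f} per steradian")
    ```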

  1. The effects of the overline running model of the high-speed trains on the existing lines

    NASA Astrophysics Data System (ADS)

    Qian, Yong-Sheng; Zeng, Jun-Wei; Zhang, Xiao-Long; Wang, Jia-Yuan; Lv, Ting-Ting

    2016-09-01

    This paper studies the effects on an existing railway when high-speed trains running at 216 km/h operate across it. The influence of the transportation organization mode of the existing railway on its carrying capacity is analyzed under different parking modes of the high-speed trains. To further study train departure intervals, average speeds, and delays, an automata model covering these four aspects is established. The results of this research could serve as theoretical references for newly built high-speed railways.

  2. Water Use Conservation Scenarios for the Mississippi Delta Using an Existing Regional Groundwater Flow Model

    NASA Astrophysics Data System (ADS)

    Barlow, J. R.; Clark, B. R.

    2010-12-01

    The alluvial plain in northwestern Mississippi, locally referred to as the Delta, is a major agricultural area which contributes significantly to the economy of Mississippi. Land use in this area can be greater than 90 percent agriculture, primarily for growing catfish, corn, cotton, rice, and soybean. Irrigation is needed to smooth out the vagaries of climate and is necessary for the cultivation of rice and for the optimization of corn and soybean. The Mississippi River Valley alluvial (MRVA) aquifer, which underlies the Delta, is the sole source of water for irrigation, and overuse of the aquifer has led to water-level declines, particularly in the central region. The Yazoo-Mississippi-Delta Joint Water Management District (YMD), which is responsible for water issues in the 17-county area that makes up the Delta, is directing resources to reduce water use through conservation efforts. The U.S. Geological Survey (USGS) recently completed a regional groundwater flow model of the entire Mississippi embayment, including the Mississippi Delta region, to further our understanding of water availability within the embayment system. This model is being used by the USGS to assist YMD in optimizing their conservation efforts by applying various water-use reduction scenarios, either uniformly throughout the Delta or in focused areas where there have been large groundwater declines in the MRVA aquifer.

  3. Design of a Representative Low Earth Orbit Satellite to Improve Existing Debris Models

    NASA Technical Reports Server (NTRS)

    Clark, S.; Dietrich, A.; Werremeyer, M.; Fitz-Coy, N.; Liou, J.-C.

    2012-01-01

    This paper summarizes the process and methodologies used in the design of a small satellite, DebriSat, that represents the materials and construction methods used in modern low Earth orbit (LEO) satellites. This satellite will be used in a future hypervelocity impact test with the overall purpose of investigating the physical characteristics of modern LEO satellites after an on-orbit collision. The major ground-based satellite impact experiment used by the DoD and NASA in their development of satellite breakup models was conducted in 1992. The target used for that experiment was a Navy Transit satellite (40 cm, 35 kg) fabricated in the 1960s. Modern satellites are very different in materials and construction techniques from a satellite built 40 years ago. Therefore, there is a need to conduct a similar experiment using a modern target satellite to improve the fidelity of the satellite breakup models. The design of DebriSat focuses on building a next-generation satellite that more accurately portrays modern satellites. The design included a comprehensive study of historical LEO satellite designs and missions within the past 15 years for satellites ranging from 10 kg to 5,000 kg. This study identified modern trends in hardware, material, and construction practices utilized in recent LEO missions, and helped direct the design of DebriSat.

  4. Framework for a Quantitative Systemic Toxicity Model (FutureToxII)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  5. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  6. Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  7. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    NASA Astrophysics Data System (ADS)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece, and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since no market prices are available for the evaluation process. Moreover, there is a particular gap in quantifying the damages and the expenditures necessary for implementing mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories are dependent on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches of vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from responsible administrative bodies and analysed on an object level. The used model is based on a basin-scale approach as well as data on elements at risk exposed

  8. Principles of microRNA Regulation Revealed Through Modeling microRNA Expression Quantitative Trait Loci

    PubMed Central

    Budach, Stefan; Heinig, Matthias; Marsico, Annalisa

    2016-01-01

    Extensive work has been dedicated to study mechanisms of microRNA-mediated gene regulation. However, the transcriptional regulation of microRNAs themselves is far less well understood, due to difficulties determining the transcription start sites of transient primary transcripts. This challenge can be addressed using expression quantitative trait loci (eQTLs) whose regulatory effects represent a natural source of perturbation of cis-regulatory elements. Here we used previously published cis-microRNA-eQTL data for the human GM12878 cell line, promoter predictions, and other functional annotations to determine the relationship between functional elements and microRNA regulation. We built a logistic regression model that classifies microRNA/SNP pairs into eQTLs or non-eQTLs with 85% accuracy; shows microRNA-eQTL enrichment for microRNA precursors, promoters, enhancers, and transcription factor binding sites; and depletion for repressed chromatin. Interestingly, although there is a large overlap between microRNA eQTLs and messenger RNA eQTLs of host genes, 74% of these shared eQTLs affect microRNA and host expression independently. Considering microRNA-only eQTLs we find a significant enrichment for intronic promoters, validating the existence of alternative promoters for intragenic microRNAs. Finally, in line with the GM12878 cell line derived from B cells, we find genome-wide association (GWA) variants associated to blood-related traits more likely to be microRNA eQTLs than random GWA and non-GWA variants, aiding the interpretation of GWA results. PMID:27260304
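
    The classifier itself is standard logistic regression on binary annotation features. A self-contained sketch on simulated data (random flags standing in for promoter/enhancer/TFBS/chromatin annotations; effect sizes assumed) is shown below:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Simulated stand-in for the paper's setup: binary annotation features
    # predicting whether a microRNA/SNP pair is an eQTL.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 5)).astype(float)
    true_w = np.array([1.5, 1.0, 0.8, -1.2, 0.3])   # assumed effects
    p = 1.0/(1.0 + np.exp(-(X @ true_w - 1.0)))
    y = (rng.random(1000) < p).astype(int)           # simulated labels

    clf = LogisticRegression()
    print("5-fold CV accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())
    print("fitted log-odds per feature:", clf.fit(X, y).coef_.round(2))
    ```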

  9. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
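
    A schematic of the sub-model idea, using synthetic spectra and a simple switch between two composition ranges (the actual ChemCam implementation blends predictions more carefully, e.g. in overlap regions), might look like this:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Synthetic stand-in for LIBS spectra: intensity grows with the element's
    # concentration plus noise. Two range-restricted sub-models are combined
    # by switching on the full model's first guess.
    rng = np.random.default_rng(1)
    n, p = 300, 120
    comp = rng.uniform(0.0, 100.0, n)               # "wt%" of one element
    X = np.outer(comp, rng.random(p)) + rng.normal(0.0, 5.0, (n, p))

    full = PLSRegression(n_components=5).fit(X, comp)
    low = PLSRegression(n_components=5).fit(X[comp < 50], comp[comp < 50])
    high = PLSRegression(n_components=5).fit(X[comp >= 50], comp[comp >= 50])

    def blended_predict(Xnew):
        guess = full.predict(Xnew).ravel()          # full model picks a range
        return np.where(guess < 50,
                        low.predict(Xnew).ravel(),
                        high.predict(Xnew).ravel())

    print("predicted:", blended_predict(X[:5]).round(1))
    print("true:     ", comp[:5].round(1))
    ```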

  10. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling.

    PubMed

    Dick, Daniel G; Maxwell, Erin E

    2015-07-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'.

  11. 18FDG synthesis and supply: a journey from existing centralized to future decentralized models.

    PubMed

    Uz Zaman, Maseeh; Fatima, Nosheen; Sajjad, Zafar; Zaman, Unaiza; Tahseen, Rabia; Zaman, Areeba

    2014-01-01

    Positron emission tomography (PET), as the functional component of current hybrid imaging (like PET/CT or PET/MRI), seems set to dominate the horizon of medical imaging in coming decades. 18Fluorodeoxyglucose (18FDG) is the most commonly used probe in oncology, and also in cardiology and neurology, around the globe. However, the major capital cost and exorbitant running expenditure of low-to-medium-energy cyclotrons (about 20 MeV) and radiochemistry units are the principal reasons for the low number of cyclotrons, in contrast to the mushrooming growth of PET scanners. This fact, together with the relatively long half-life of 18F (110 minutes), has paved the way for a centralized model in which 18FDG is produced by commercial PET radiopharmacies and the finished product (multi-dose vial with tungsten shielding) is dispensed to customers having only PET scanners. This has indeed reduced costs, but it has the limitation of depending upon the timely arrival of daily shipments; delay for any reason results in cancellation or rescheduling of PET procedures. In recent years, industry and academia have taken a step forward by producing low-energy, table-top cyclotrons with compact and automated radiochemistry units (lab-on-chip). This decentralized strategy enables users to produce on-demand doses of a PET probe themselves at reasonably low cost using automated and user-friendly technology. This technological development would provide a real impetus to the availability of complete PET-based molecular imaging setups at an affordable cost to developing countries.
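
    The decay arithmetic behind the centralized model is simple: activity falls as A(t) = A0 · 2^(-t/T½) with T½ = 110 min for 18F, so each extra hour of shipment costs roughly a third of the dose (dose values below are illustrative):

    ```python
    # Activity remaining after shipment: A(t) = A0 * 2**(-t / T_half),
    # with T_half = 110 min for 18F (example dose values are illustrative).
    A0, t_half = 100.0, 110.0            # mCi at end of synthesis; minutes
    for t in (60.0, 110.0, 220.0):       # possible transport delays
        print(f"{t:5.0f} min -> {A0 * 2**(-t/t_half):6.1f} mCi remaining")
    ```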

  12. Models and methods for quantitative analysis of surface-enhanced Raman spectra.

    PubMed

    Li, Shuo; Nyagilo, James O; Dave, Digant P; Gao, Jean

    2014-03-01

    The quantitative analysis of surface-enhanced Raman spectra using scattering nanoparticles has shown potential for promising applications in in vivo molecular imaging. Diverse approaches have been used for quantitative analysis of Raman spectral information, which can be categorized as direct classical least squares models, full-spectrum multivariate calibration models, selected multivariate calibration models, and latent variable regression (LVR) models. However, the working principles of these methods in Raman spectra applications remain poorly understood, and a clear picture of the overall performance of each model is missing. Based on the characteristics of Raman spectra, in this paper we first provide the theoretical foundation of the aforementioned commonly used models and show why the LVR models are more suitable for quantitative analysis of Raman spectra. Then, we demonstrate the fundamental connections and differences between different LVR methods, such as principal component regression, reduced-rank regression, partial least squares regression (PLSR), canonical correlation regression, and robust canonical analysis, by comparing their objective functions and constraints. We further prove that PLSR is literally a blend of multivariate calibration and feature extraction that relates concentrations of nanotags to spectrum intensity. The extracted features (a.k.a. latent variables) serve two purposes: the best representation of the predictor matrix and correlation with the response matrix. These illustrations give a new understanding of traditional PLSR and explain why PLSR exceeds other methods in quantitative analysis of Raman spectra. In the end, all the methods are tested on Raman spectra datasets with different evaluation criteria to evaluate their performance.

  13. Evaluation of Modeled and Measured Energy Savings in Existing All Electric Public Housing in the Pacific Northwest

    SciTech Connect

    Gordon, Andrew; Lubliner, Michael; Howard, Luke; Kunkle, Rick; Salzberg, Emily

    2014-04-01

    This project analyzes the cost effectiveness of energy savings measures installed by a large public housing authority in Salishan, a community in Tacoma, Washington. The research focuses on the modeled and measured energy usage of the first six phases of construction and compares the energy usage of those phases to phase 7. Market-ready energy solutions were also evaluated to improve the efficiency of new and existing (built since 2001) affordable housing in the marine climate of Washington State.

  14. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum

    PubMed Central

    Milic, Natasa M.; Trajkovic, Goran Z.; Bukumiric, Zoran M.; Cirkovic, Andja; Nikolic, Ivan M.; Milin, Jelena S.; Milic, Nikola V.; Savic, Marko D.; Corac, Aleksandar M.; Marinkovic, Jelena M.; Stanisavljevic, Dejana M.

    2016-01-01

    Background Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. Methods This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013–14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Results Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). Conclusion This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient

  15. Human judgment vs. quantitative models for the management of ecological resources.

    PubMed

    Holden, Matthew H; Ellner, Stephen P

    2016-07-01

    Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed
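
    Case (4), a model that ignores age structure, reduces to textbook surplus-production arithmetic; for logistic growth the optimal constant harvest follows in two lines (numbers illustrative):

    ```python
    # Logistic surplus-production model (no age structure):
    # dN/dt = r*N*(1 - N/K) - H. The largest sustainable constant harvest
    # is H* = r*K/4, taken at a standing stock of N = K/2.
    r, K = 0.4, 1_000_000        # per-year growth rate, carrying capacity
    H_star = r*K/4.0
    print(f"MSY harvest: {H_star:,.0f} fish/yr at a stock of {K/2:,.0f}")
    ```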

  16. Quantitative Genetics and Functional–Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    PubMed Central

    Letort, Véronique; Mahe, Paul; Cournède, Paul-Henry; de Reffye, Philippe; Courtois, Brigitte

    2008-01-01

    Background and Aims Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype × environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional–structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods The GREENLAB model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the species-specific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. Key Results and Conclusions By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared in the case of phenotypic traits – such as cob weight – and when traits were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters (and the corresponding genotype) associated with yield optimization of a GREENLAB maize model. The paper discusses the potentials of GREENLAB to represent environment × genotype

  17. Testing the influence of vertical, pre-existing joints on normal faulting using analogue and 3D discrete element models (DEM)

    NASA Astrophysics Data System (ADS)

    Kettermann, Michael; von Hagke, Christoph; Virgo, Simon; Urai, Janos L.

    2015-04-01

    Brittle rocks are often affected by different generations of fractures that influence each other. We study pre-existing vertical joints followed by a faulting event. Understanding the effect of these interactions on fracture/fault geometries as well as the development of dilatancy and the formation of cavities as potential fluid pathways is crucial for reservoir quality prediction and production. Our approach combines scaled analogue and numerical modeling. Using cohesive hemihydrate powder allows us to create open fractures prior to faulting. The physical models are reproduced using the ESyS-Particle discrete element Modeling Software (DEM), and different parameters are investigated. Analogue models were carried out in a manually driven deformation box (30x28x20 cm) with a 60° dipping pre-defined basement fault and 4.5 cm of displacement. To produce open joints prior to faulting, sheets of paper were mounted in the box to a depth of 5 cm at a spacing of 2.5 cm. Powder was then sieved into the box, embedding the paper almost entirely (column height of 19 cm), and the paper was removed. We tested the influence of different angles between the strike of the basement fault and the joint set (0°, 4°, 8°, 12°, 16°, 20°, and 25°). During deformation we captured structural information by time-lapse photography that allows particle imaging velocimetry analyses (PIV) to detect localized deformation at every increment of displacement. Post-mortem photogrammetry preserves the final 3-dimensional structure of the fault zone. We observe that no faults or fractures occur parallel to basement-fault strike. Secondary fractures are mostly oriented normal to primary joints. At the final stage of the experiments we analyzed semi-quantitatively the number of connected joints, number of secondary fractures, degree of segmentation (i.e. number of joints accommodating strain), damage zone width, and the map-view area fraction of open gaps. Whereas the area fraction does not change

  18. A Computational Study of Cavitation Model Validity Using a New Quantitative Criterion

    NASA Astrophysics Data System (ADS)

    Hagar Alm, El-Din; Zhang, Yu-Sheng; Medhat, Elkelawy

    2012-06-01

    The predictive capability of two different numerical cavitation models accounting for the onset and development of cavitation inside real-sized diesel nozzle holes is assessed on the basis of referenced experimental data. The calculations performed indicate that, for the same model assumptions, numerical implementation, discretization scheme, grid resolution, and turbulence model, the predictions of the differently applied physical cavitation submodels are phenomenologically distinct from each other. We present a comparison by applying a new criterion for the quantitative comparison of the results obtained from both cavitation models.

  19. PHYSIOLOGICALLY-BASED PHARMACOKINETIC ( PBPK ) MODEL FOR METHYL TERTIARY BUTYL ETHER ( MTBE ): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  20. Quantitative retinal blood flow mapping from fluorescein videoangiography using tracer kinetic modeling.

    PubMed

    Tichauer, Kenneth M; Guthrie, Micah; Hones, Logan; Sinha, Lagnojita; St Lawrence, Keith; Kang-Mieler, Jennifer J

    2015-05-15

    Abnormal retinal blood flow (RBF) has been associated with numerous retinal pathologies, yet existing methods for measuring RBF predominantly provide only relative measures of blood flow and are unable to quantify volumetric blood flow, which would allow direct patient-to-patient comparison. This work presents a methodology based on linear systems theory and an image-based arterial input function to quantitatively map volumetric blood flow from standard fluorescein videoangiography data, and it is therefore directly translatable to the clinic. Application of the approach to fluorescein retinal videoangiography in rats (4 control, 4 diabetic) demonstrated significantly higher RBF in 4-5 week diabetic rats, as expected from the literature.
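
    The linear-systems formulation treats the tissue dye curve as the arterial input function convolved with a flow-scaled residue function, and recovers flow by deconvolution. A synthetic sketch using truncated-SVD deconvolution (illustrative curves and units, not the paper's exact algorithm):

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    # Tissue curve = flow * (AIF convolved with a residue function).
    # Recover flow via truncated-SVD deconvolution on noiseless,
    # synthetic curves.
    dt = 0.1
    t = np.arange(0.0, 30.0, dt)
    aif = (t/2.0)**3 * np.exp(-t/2.0)           # gamma-variate arterial input
    flow_true, mtt = 0.8, 4.0
    residue = np.exp(-t/mtt)                     # exponential residue function
    tissue = flow_true * dt * np.convolve(aif, residue)[:t.size]

    A = dt * np.tril(toeplitz(aif))              # discrete convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > 0.01*s[0], 1.0/s, 0.0)  # truncate small singular values
    k = Vt.T @ (s_inv * (U.T @ tissue))          # flow-scaled residue estimate
    print(f"recovered flow: {k.max():.2f} (true {flow_true:.2f})")
    ```

    The singular-value cutoff trades bias for noise robustness; with real, noisy angiography data a stronger truncation or regularized inverse would be needed.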

  1. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling

    PubMed Central

    Dick, Daniel G.; Maxwell, Erin E.

    2015-01-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the ‘migration model’. PMID:26156130

  2. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
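
    To make the "sub-model" idea concrete, the sketch below trains partial least squares (PLS) regressions on low- and high-concentration subsets of a synthetic dataset and blends their predictions, using a full-range model as the reference for the blending weight. The thresholds and weighting scheme are illustrative assumptions, not the ChemCam implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for LIBS data (hypothetical): 200 spectra with 50
# channels, and a target composition in weight percent.
X = rng.normal(size=(200, 50))
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=0.5, size=200)

def fit(X, y, n_components=5):
    return PLSRegression(n_components=n_components).fit(X, y)

# "Full" model on the whole composition range, plus sub-models trained
# on low- and high-composition subsets only.
full = fit(X, y)
lo_mask, hi_mask = y < 50, y >= 50
sub_lo, sub_hi = fit(X[lo_mask], y[lo_mask]), fit(X[hi_mask], y[hi_mask])

def blended_predict(X_new, low=45.0, high=55.0):
    """Blend sub-model predictions using the full model's estimate:
    below `low` use the low sub-model, above `high` the high sub-model,
    and linearly interpolate between them in the overlap region."""
    ref = full.predict(X_new).ravel()
    p_lo = sub_lo.predict(X_new).ravel()
    p_hi = sub_hi.predict(X_new).ravel()
    w = np.clip((ref - low) / (high - low), 0.0, 1.0)
    return (1 - w) * p_lo + w * p_hi

print(np.round(blended_predict(X[:5]), 1), np.round(y[:5], 1))
```

    The design point is that each sub-model only extrapolates slightly beyond the composition range it was trained on, which is where single full-range models tend to fail.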

  3. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  4. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    SciTech Connect

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  5. Quantitative 3D investigation of Neuronal network in mouse spinal cord model.

    PubMed

    Bukreeva, I; Campi, G; Fratini, M; Spanò, R; Bucci, D; Battaglia, G; Giove, F; Bravin, A; Uccelli, A; Venturi, C; Mastrogiacomo, M; Cedola, A

    2017-01-23

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mouse spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a "database" for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.
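
    A spatial statistical analysis of segmented neuron positions can start from nearest-neighbour distances. The sketch below computes a Clark-Evans-style randomness index for hypothetical 3D motor-neuron centroids; the published analysis is more elaborate, so treat this as a minimal illustration of the idea.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical motor-neuron centroids (micrometres) segmented from a
# phase-contrast tomogram; random points stand in for real data here.
points = rng.uniform(0, 500, size=(300, 3))

# Nearest-neighbour distances: query k=2 because each point's nearest
# neighbour at k=1 is itself.
tree = cKDTree(points)
dist, _ = tree.query(points, k=2)
nn = dist[:, 1]

# Clark-Evans-style index: observed mean NN distance divided by the
# value expected under complete spatial randomness at the same density
# (R < 1 suggests clustering, R > 1 suggests regular spacing).
density = len(points) / 500.0**3
expected = 0.55396 / density ** (1.0 / 3.0)   # 3D Poisson expectation
print(f"mean NN distance {nn.mean():.1f} um, R = {nn.mean() / expected:.2f}")
```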

  6. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    PubMed Central

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mouse spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies. PMID:28112212

  7. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for the research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow for a quantitative investigation of the neuronal network. Exploiting the high coherence and the high flux of synchrotron sources, X-ray Phase-Contrast multiscale-Tomography allows for the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal-network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis on the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mouse spinal cord. Then, we compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the neuronal network main features for a comparative investigation of neurodegenerative diseases and therapies.

  8. From TLS Point Clouds to 3D Models of Trees: A Comparison of Existing Algorithms for 3D Tree Reconstruction

    NASA Astrophysics Data System (ADS)

    Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as for urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others (PlantScan3D and SimpleTree) to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees, with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we have acquired dense point clouds of six different urban trees, with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limits of each reconstruction algorithm are highlighted. Overall, very satisfactory results can be achieved for 3D reconstructions of tree topology as well as of tree volume.

  9. The Role of Pre-Existing Disturbances in the Effect of Marine Reserves on Coastal Ecosystems: A Modelling Approach

    PubMed Central

    Savina, Marie; Condie, Scott A.; Fulton, Elizabeth A.

    2013-01-01

    We have used an end-to-end ecosystem model to explore responses over 30 years to coastal no-take reserves covering up to 6% of the fifty thousand square kilometres of continental shelf and slope off the coast of New South Wales (Australia). The model is based on the Atlantis framework, which includes a deterministic, spatially resolved three-dimensional biophysical model that tracks nutrient flows through key biological groups, as well as extraction by a range of fisheries. The model results support previous empirical studies in finding clear benefits of reserves to top predators such as sharks and rays throughout the region, while also showing how many of their major prey groups (including commercial species) experienced significant declines. It was found that the net impact of marine reserves was dependent on the pre-existing levels of disturbance (i.e. fishing pressure), and to a lesser extent on the size of the marine reserves. The high fishing scenario resulted in a strongly perturbed system, where the introduction of marine reserves had clear and mostly direct effects on biomass and functional biodiversity. However, under the lower fishing pressure scenario, the introduction of marine reserves caused both direct positive effects, mainly on shark groups, and indirect negative effects through trophic cascades. Our study illustrates the need to carefully align the design and implementation of marine reserves with policy and management objectives. Trade-offs may exist not only between fisheries and conservation objectives, but also among conservation objectives. PMID:23593432

  10. Quantitative evaluation by measurement and modeling of the variations in dose distributions deposited in mobile targets.

    PubMed

    Ali, Imad; Alsbou, Nesreen; Taguenang, Jean-Michel; Ahmad, Salahuddin

    2017-03-03

    The objective of this study is to quantitatively evaluate, by measurement and modeling, variations of dose distributions deposited in a mobile target. The effects of motion-induced variation in dose distribution on tumor dose coverage and sparing of normal tissues were investigated quantitatively. The dose distributions with motion artifacts were modeled considering different motion patterns that include (a) motion with constant speed and (b) sinusoidal motion. The model predictions of the dose distributions with motion artifacts were verified with measurement, where the dose distributions from various plans that included three-dimensional conformal and intensity-modulated fields were measured with a multiple-diode-array detector (MapCHECK2) mounted on a mobile platform that moves with adjustable motion parameters. For each plan, the dose distributions were then measured with MapCHECK2 using different motion amplitudes from 0 to 25 mm. In addition, mathematical modeling was developed to predict the variations in the dose distributions and their dependence on the motion parameters, which included amplitude, frequency, and phase for sinusoidal motions. The dose distributions varied with motion and depended on the motion pattern, particularly for sinusoidal motion, which spread the dose out along the direction of motion. Study results showed that in the dose region between the isocenter and the 50% isodose line, the dose profile decreased with increasing motion amplitude. As the range of motion became larger than the field length along the direction of motion, the dose profiles changed overall, including the central-axis dose and the 50% isodose line. If the total dose was delivered over a time much longer than the periodic time of the motion, variations in motion frequency and phase did not affect the dose profiles. As a result, the motion dose modeling developed in this study provided quantitative characterization of the variation in dose distributions induced by motion, which...
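
    For delivery times much longer than the motion period, the sinusoidal case reduces to convolving the static profile with the displacement probability density p(u) = 1/(π√(A² − u²)). The sketch below applies that to a toy 1D profile; the field shape and all parameters are invented for illustration.

```python
import numpy as np

# 1D static dose profile: a 60 mm "field" with smoothed edges
# (arbitrary parameters for illustration).
x = np.arange(-80.0, 80.0, 0.5)                 # mm
static = 0.5 * (np.tanh((x + 30) / 3) - np.tanh((x - 30) / 3))

def blur_sinusoidal(dose, dx, amplitude_mm):
    """Convolve a static profile with the probability density of a
    sinusoidal displacement u = A*sin(theta): p(u) = 1/(pi*sqrt(A^2-u^2)).
    Valid when delivery time >> motion period, so phase averages out."""
    u = np.arange(-amplitude_mm + dx / 2, amplitude_mm, dx)
    pdf = 1.0 / (np.pi * np.sqrt(amplitude_mm**2 - u**2))
    pdf /= pdf.sum()                             # normalise discrete kernel
    return np.convolve(dose, pdf, mode="same")

# Larger amplitudes mainly blur the penumbra until the motion range
# approaches the field size, after which the whole profile changes.
for A in (5.0, 15.0, 25.0):
    blurred = blur_sinusoidal(static, 0.5, A)
    print(f"A = {A:4.1f} mm -> 50% width {np.ptp(x[blurred >= 0.5]):.1f} mm")
```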

  11. Comparison of quantitative structure-activity relationship model performances on carboquinone derivatives.

    PubMed

    Bolboacă, Sorana-Daniela; Jäntschi, Lorentz

    2009-10-14

    Quantitative structure-activity relationship (QSAR) models are used to understand how the structure and activity of chemical compounds relate. In the present study, 37 carboquinone derivatives were evaluated and two different QSAR models were developed using members of the Molecular Descriptors Family (MDF) and the Molecular Descriptors Family on Vertices (MDFV). The usual parameters of regression models and the following estimators were defined and calculated in order to analyze the validity and to compare the models: Akaike's information criteria (three parameters), Schwarz (or Bayesian) information criterion, Amemiya prediction criterion, Hannan-Quinn criterion, Kubinyi function, Steiger's Z test, and Akaike's weights. The MDF and MDFV models proved to have the same estimation ability of the goodness-of-fit according to Steiger's Z test. The MDFV model proved to be the best model for the considered carboquinone derivatives according to the defined information and prediction criteria, Kubinyi function, and Akaike's weights.

  12. Quantitative Structure‐activity Relationship (QSAR) Models for Docking Score Correction

    PubMed Central

    Yamasaki, Satoshi; Yasumatsu, Isao; Takeuchi, Koh; Kurosawa, Takashi; Nakamura, Haruki

    2016-01-01

    In order to improve docking score correction, we developed several structure‐based quantitative structure‐activity relationship (QSAR) models by protein‐drug docking simulations and applied these models to public affinity data. The prediction models used descriptor‐based regression, and the compound descriptor was a set of docking scores against multiple (∼600) proteins including nontargets. The binding free energy that corresponded to the docking score was approximated by a weighted average of docking scores for multiple proteins, and we tried linear, weighted linear and polynomial regression models considering the compound similarities. In addition, we tried a combination of these regression models for individual data sets such as IC50, Ki, and %inhibition values. The cross‐validation results showed that the weighted linear model was more accurate than the simple linear regression model. Thus, the QSAR approaches based on the affinity data of public databases should improve docking scores. PMID:28001004

  13. [Study on temperature correctional models of quantitative analysis with near infrared spectroscopy].

    PubMed

    Zhang, Jun; Chen, Hua-cai; Chen, Xing-dan

    2005-06-01

    The effect of environment temperature on near infrared spectroscopic quantitative analysis was studied. The temperature correction model was calibrated with 45 wheat samples at different environment temperatures, with temperature as an external variable. The constant temperature model was calibrated with 45 wheat samples at the same temperature. The predicted results of the two models for the protein contents of wheat samples at different temperatures were compared. The results showed that the mean standard error of prediction (SEP) of the temperature correction model was 0.333, whereas the SEP of the constant temperature (22 degrees C) model increased as the temperature difference enlarged, reaching 0.602 when the model was used at 4 degrees C. It is suggested that the temperature correction model improves the analysis precision.

  14. Quantitative DFT modeling of the enantiomeric excess for dioxirane-catalyzed epoxidations

    PubMed Central

    Schneebeli, Severin T.; Hall, Michelle Lynn

    2009-01-01

    Herein we report the first fully quantum mechanical study of enantioselectivity for a large dataset. We show that transition state modeling at the UB3LYP-DFT/6-31G* level of theory can accurately model enantioselectivity for various dioxirane-catalyzed asymmetric epoxidations. All the synthetically useful high selectivities are successfully “predicted” by this method. Our results hint at the utility of this method to further model other asymmetric reactions and facilitate the discovery process for the experimental organic chemist. Our work suggests the possibility of using computational methods not simply to explain organic phenomena, but also to predict them quantitatively. PMID:19243187

  15. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    NASA Astrophysics Data System (ADS)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
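
    For reference, the OV model itself is compact enough to simulate in a few lines: dv_i/dt = a[V(h_i) − v_i] on a circuit, where V is a tanh-shaped optimal velocity function of the headway h_i. The sketch below uses illustrative parameter values (not the calibrated ones from the study); at this density the uniform flow is linearly unstable, so a jam emerges spontaneously.

```python
import numpy as np

# OV model on a circular road: dv_i/dt = a * (V(h_i) - v_i), with
# headway h_i = x_{i+1} - x_i (periodic). Constants are illustrative.
N, L = 30, 600.0                     # vehicles, circuit length (m)
a = 1.0                              # driver sensitivity (1/s)

def V(h):
    """tanh-shaped optimal velocity function (m/s)."""
    return 16.8 * (np.tanh(0.086 * (h - 25.0)) + 0.913)

rng = np.random.default_rng(2)
x = np.linspace(0, L, N, endpoint=False) + rng.normal(0, 0.5, N)
h = np.diff(np.append(x, x[0] + L))
v = V(h)                             # start near the uniform-flow solution

dt = 0.05
for _ in range(20000):               # 1000 s of simulated time
    h = np.diff(np.append(x, x[0] + L))   # periodic headways
    v += dt * a * (V(h) - v)
    x = (x + dt * v) % L

# A large velocity spread signals a spontaneous jam (the dynamical
# phase transition); near-zero spread means uniform flow survived.
print(f"velocity spread: {v.max() - v.min():.2f} m/s")
```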

  16. Experimentally validated quantitative linear model for the device physics of elastomeric microfluidic valves

    NASA Astrophysics Data System (ADS)

    Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French

    2007-03-01

    A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295 μm lateral dimensions, 16-39 μm membrane thickness, and 1-28 psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150 μm width) are shown to behave like thin springs.

  17. Modelling Activities in Kinematics: Understanding Quantitative Relations with the Contribution of Qualitative Reasoning

    NASA Astrophysics Data System (ADS)

    Orfanos, Stelios

    2010-01-01

    In Greek traditional teaching, many significant concepts are introduced in a sequence that does not provide students with all the information required for comprehension. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, taking into account that the modelling process forces students to change their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative and in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some of the results of our investigation into modelling activities related to kinematical concepts. For this purpose, we have used ModellingSpace, an environment that was especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of observing directly simulations of real objects and/or all the other alternative forms of representation (tables of values, graphic representations, and bar charts). In order to answer the questions, the students formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional teaching, students are educated to use formulas as the most important strategy. Students often recall formulas in order to use them, without an in-depth understanding of them. Students commonly use the quantitative type of reasoning, since it is primarily used in teaching, although it may not be fully understood by them.

  18. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    PubMed

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence...

  19. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    PubMed

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Not only the nature of these interactions, but more fundamentally also the structure of these heterogeneous polyphenolic molecules are not completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  20. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    PubMed

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among all the counties or districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.

  1. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    PubMed Central

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction: Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods: A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results: The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix evaluating model components and methodology. Conclusion: When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes and test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (e.g. HCV and HIV at the same time) might prove important for decision makers. PMID:26689908
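
    Most of the decision-analytic screening models reviewed here are, at their core, Markov cohort models. The sketch below shows that machinery in miniature (states, a transition matrix, discounted costs and QALYs, and an incremental cost-effectiveness ratio); every probability, cost, and utility in it is invented for illustration.

```python
import numpy as np

# Minimal Markov cohort sketch of a screen-vs-no-screen comparison.
# States: undiagnosed infection, diagnosed/treated, advanced disease,
# dead. All numbers below are hypothetical.
P_no_screen = np.array([
    [0.92, 0.02, 0.05, 0.01],   # undiagnosed
    [0.00, 0.96, 0.02, 0.02],   # diagnosed (treatment slows progression)
    [0.00, 0.00, 0.90, 0.10],   # advanced
    [0.00, 0.00, 0.00, 1.00],   # dead (absorbing)
])
P_screen = P_no_screen.copy()
P_screen[0] = [0.80, 0.14, 0.05, 0.01]        # screening finds cases earlier

cost = np.array([0.0, 2000.0, 15000.0, 0.0])  # annual cost per state
qaly = np.array([0.90, 0.92, 0.60, 0.0])      # annual utility per state

def run(P, years=30, screen_cost=0.0, discount=0.035):
    cohort = np.array([1.0, 0.0, 0.0, 0.0])
    total_cost, total_qaly = screen_cost, 0.0
    for year in range(years):
        d = 1.0 / (1.0 + discount) ** year    # discounting
        total_cost += d * cohort @ cost
        total_qaly += d * cohort @ qaly
        cohort = cohort @ P                   # advance the cohort one cycle
    return total_cost, total_qaly

c0, q0 = run(P_no_screen)
c1, q1 = run(P_screen, screen_cost=50.0)
print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```

    Dynamic transmission models, by contrast, would let the infection probabilities depend on current prevalence, which is exactly the dynamic-versus-static choice the review highlights.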

  2. A Quantitative Quasispecies Theory-Based Model of Virus Escape Mutation Under Immune Selection

    DTIC Science & Technology

    Woo, Hyung-June; Reifman, Jaques

    2012-01-01

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although ... response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral...

  3. Existence and asymptotics of traveling wave fronts for a delayed nonlocal diffusion model with a quiescent stage

    NASA Astrophysics Data System (ADS)

    Zhou, Kai; Lin, Yuan; Wang, Qi-Ru

    2013-11-01

    In this paper, we propose a delayed nonlocal diffusion model with a quiescent stage and study its dynamics. By using Schauder's fixed point theorem and the upper-lower solution method, we establish the existence of traveling wave fronts for speed c ⩾ c∗(τ), where c∗(τ) is a critical value. With the method of Carr and Chmaj (PAMS, 2004), we discuss the asymptotic behavior of traveling wave fronts and then obtain the nonexistence of traveling wave fronts for c < c∗(τ).

  4. Existence, multiplicity and stability of endemic states for an age-structured S-I epidemic model.

    PubMed

    Breda, D; Visetti, D

    2012-01-01

    We study an S-I type epidemic model in an age-structured population, with mortality due to the disease. A threshold quantity is found that controls the stability of the disease-free equilibrium and guarantees the existence of an endemic equilibrium. We obtain conditions on the age-dependence of the susceptibility to infection that imply the uniqueness of the endemic equilibrium. An example with two endemic equilibria is shown. Finally, we analyse numerically how the stability of the endemic equilibrium is affected by the extra-mortality and by the possible periodicities induced by the demographic age-structure.

  5. A quantitative model of human DNA base excision repair. I. Mechanistic insights.

    PubMed

    Sokhansanj, Bahrad A; Rodrigue, Garry R; Fitch, J Patrick; Wilson, David M

    2002-04-15

    Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts considerably less pathway throughput than observed in experimental in vitro assays. This finding, in combination with the effects of pathway cooperativity on model throughput, supports the hypothesis of cooperation during abasic site repair and between the apurinic/apyrimidinic (AP) endonuclease, Ape1, and the 8-oxoguanine DNA glycosylase, Ogg1. The quantitative model also predicts that for 8-oxoguanine and hydrolytic AP site damage, short-patch Polbeta-mediated BER dominates, with minimal switching to the long-patch subpathway. Sensitivity analysis of the model indicates that the Polbeta-catalyzed reactions have the most control over pathway throughput, although other BER reactions contribute to pathway efficiency as well. The studies within represent a first step in a developing effort to create a predictive model for BER cellular capacity.
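
    The pathway-throughput logic can be illustrated with a toy linear chain of first-order steps integrated as ODEs; the published model is built from measured single-enzyme kinetics, so the rate constants below are placeholders, and the sensitivity probe only mirrors the paper's rate-limiting-step finding in spirit.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy BER-like chain: damaged site S0 -> S1 -> S2 -> S3 -> repaired S4,
# one first-order step per enzyme. Rate constants are hypothetical.
def make_rhs(k):
    def rhs(t, s):
        flux = k * s[:-1]          # first-order flux out of each state
        ds = np.zeros_like(s)
        ds[:-1] -= flux            # consumption
        ds[1:] += flux             # feeds the next intermediate
        return ds
    return rhs

def repaired_fraction(k, t_end=120.0):
    s0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
    sol = solve_ivp(make_rhs(k), (0.0, t_end), s0, t_eval=[t_end])
    return sol.y[-1, -1]

k = np.array([0.5, 0.05, 0.2, 0.1])            # 1/min (invented)
print(f"repaired after 2 h: {repaired_fraction(k):.3f}")

# Crude sensitivity analysis: doubling each rate constant in turn shows
# that throughput responds most to the slowest, rate-limiting step.
for i in range(len(k)):
    k2 = k.copy(); k2[i] *= 2.0
    print(f"2x k[{i}] -> repaired = {repaired_fraction(k2):.3f}")
```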

  6. A quantitative model of human DNA base excision repair. I. mechanistic insights

    PubMed Central

    Sokhansanj, Bahrad A.; Rodrigue, Garry R.; Fitch, J. Patrick; Wilson, David M.

    2002-01-01

    Base excision repair (BER) is a multistep process involving the sequential activity of several proteins that cope with spontaneous and environmentally induced mutagenic and cytotoxic DNA damage. Quantitative kinetic data on single proteins of BER have been used here to develop a mathematical model of the BER pathway. This model was then employed to evaluate mechanistic issues and to determine the sensitivity of pathway throughput to altered enzyme kinetics. Notably, the model predicts considerably less pathway throughput than observed in experimental in vitro assays. This finding, in combination with the effects of pathway cooperativity on model throughput, supports the hypothesis of cooperation during abasic site repair and between the apurinic/apyrimidinic (AP) endonuclease, Ape1, and the 8-oxoguanine DNA glycosylase, Ogg1. The quantitative model also predicts that for 8-oxoguanine and hydrolytic AP site damage, short-patch Polβ-mediated BER dominates, with minimal switching to the long-patch subpathway. Sensitivity analysis of the model indicates that the Polβ-catalyzed reactions have the most control over pathway throughput, although other BER reactions contribute to pathway efficiency as well. The studies within represent a first step in a developing effort to create a predictive model for BER cellular capacity. PMID:11937636

  7. A Davis-Putnam program and its application to finite-order model search: Quasigroup existence problems

    SciTech Connect

    McCune, W.

    1994-09-01

    This document describes the implementation and use of a Davis-Putnam procedure for the propositional satisfiability problem. It also describes code that takes statements in first-order logic with equality and a domain size n and then searches for models of size n. The first-order model-searching code transforms the statements into a set of propositional clauses such that the first-order statements have a model of size n if and only if the propositional clauses are satisfiable. The propositional set is then given to the Davis-Putnam code; any propositional models that are found can be translated to models of the first-order statements. The first-order model-searching program accepts statements only in a flattened relational clause form without function symbols. Additional code was written to take input statements in the language of OTTER 3.0 and produce the flattened relational form. The program was successfully applied to several open questions on the existence of orthogonal quasigroups.
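
    The propositional core of this workflow is a Davis-Putnam-style satisfiability search. Below is a minimal DPLL sketch (the standard modern refinement of Davis-Putnam) on hand-built clauses; production solvers add watched literals, clause learning, and branching heuristics.

```python
# Clauses are frozensets of nonzero ints; a negative int is a negated
# variable. Returns a satisfying assignment (set of literals) or None.
def dpll(clauses, assignment=frozenset()):
    # Simplify: drop satisfied clauses, remove falsified literals.
    simplified = []
    for c in clauses:
        if any(lit in assignment for lit in c):
            continue
        c = frozenset(lit for lit in c if -lit not in assignment)
        if not c:
            return None                      # empty clause: conflict
        simplified.append(c)
    if not simplified:
        return assignment                    # all clauses satisfied
    # Unit propagation: a one-literal clause forces that literal.
    for c in simplified:
        if len(c) == 1:
            (lit,) = c
            return dpll(simplified, assignment | {lit})
    # Branch on some variable, trying both polarities.
    lit = next(iter(simplified[0]))
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (p or q) and (not p or q) and (not q or r)
cnf = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})]
print(dpll(cnf))   # prints a satisfying assignment, e.g. frozenset({1, 2, 3})
```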

  8. The complex genetic and molecular basis of a model quantitative trait.

    PubMed

    Linder, Robert A; Seidl, Fabian; Ha, Kimberly; Ehrenreich, Ian M

    2016-01-01

    Quantitative traits are often influenced by many loci with small effects. Identifying most of these loci and resolving them to specific genes or genetic variants is challenging. Yet, achieving such a detailed understanding of quantitative traits is important, as it can improve our knowledge of the genetic and molecular basis of heritable phenotypic variation. In this study, we use a genetic mapping strategy that involves recurrent backcrossing with phenotypic selection to obtain new insights into an ecologically, industrially, and medically relevant quantitative trait: tolerance of oxidative stress, as measured by resistance to hydrogen peroxide. We examine the genetic basis of hydrogen peroxide resistance in three related yeast crosses and detect 64 distinct genomic loci that likely influence the trait. By precisely resolving or cloning a number of these loci, we demonstrate that a broad spectrum of cellular processes contribute to hydrogen peroxide resistance, including DNA repair, scavenging of reactive oxygen species, stress-induced MAPK signaling, translation, and water transport. Consistent with the complex genetic and molecular basis of hydrogen peroxide resistance, we show two examples where multiple distinct causal genetic variants underlie what appears to be a single locus. Our results improve understanding of the genetic and molecular basis of a highly complex, model quantitative trait.

  9. Evaluation of Lung Metastasis in Mouse Mammary Tumor Models by Quantitative Real-time PCR

    PubMed Central

    Abt, Melissa A.; Grek, Christina L.; Ghatnekar, Gautam S.; Yeh, Elizabeth S.

    2016-01-01

    Metastatic disease is the spread of malignant tumor cells from the primary cancer site to a distant organ and is the primary cause of cancer associated death. Common sites of metastatic spread include lung, lymph node, brain, and bone. Mechanisms that drive metastasis are intense areas of cancer research. Consequently, effective assays to measure metastatic burden in distant sites of metastasis are instrumental for cancer research. Evaluation of lung metastases in mammary tumor models is generally performed by gross qualitative observation of lung tissue following dissection. Quantitative methods of evaluating metastasis are currently limited to ex vivo and in vivo imaging based techniques that require user defined parameters. Many of these techniques are at the whole organism level rather than the cellular level. Although newer imaging methods utilizing multi-photon microscopy are able to evaluate metastasis at the cellular level, these highly elegant procedures are more suited to evaluating mechanisms of dissemination rather than quantitative assessment of metastatic burden. Here, a simple in vitro method to quantitatively assess metastasis is presented. Using quantitative Real-time PCR (QRT-PCR), tumor cell specific mRNA can be detected within the mouse lung tissue. PMID:26862835

  10. Evaluation of Lung Metastasis in Mouse Mammary Tumor Models by Quantitative Real-time PCR.

    PubMed

    Abt, Melissa A; Grek, Christina L; Ghatnekar, Gautam S; Yeh, Elizabeth S

    2016-01-29

    Metastatic disease is the spread of malignant tumor cells from the primary cancer site to a distant organ and is the primary cause of cancer associated death. Common sites of metastatic spread include lung, lymph node, brain, and bone. Mechanisms that drive metastasis are intense areas of cancer research. Consequently, effective assays to measure metastatic burden in distant sites of metastasis are instrumental for cancer research. Evaluation of lung metastases in mammary tumor models is generally performed by gross qualitative observation of lung tissue following dissection. Quantitative methods of evaluating metastasis are currently limited to ex vivo and in vivo imaging based techniques that require user defined parameters. Many of these techniques are at the whole organism level rather than the cellular level. Although newer imaging methods utilizing multi-photon microscopy are able to evaluate metastasis at the cellular level, these highly elegant procedures are more suited to evaluating mechanisms of dissemination rather than quantitative assessment of metastatic burden. Here, a simple in vitro method to quantitatively assess metastasis is presented. Using quantitative Real-time PCR (QRT-PCR), tumor cell specific mRNA can be detected within the mouse lung tissue.
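
    Once cycle thresholds (Ct) are in hand, transcript burden is typically expressed by relative quantification. A worked 2^-ΔΔCt calculation with invented Ct values follows; the choice of reference gene and calibrator sample here are assumptions, not the authors' protocol.

```python
# Relative quantification by the 2^-ddCt method: the tumour-specific
# target is normalised to a reference gene, then to a calibrator sample.
# All Ct values below are invented for illustration.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    d_ct_sample = ct_target - ct_ref           # normalise to reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal      # same, in the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_cal)    # fold change vs calibrator

# Tumour-bearing lung: target Ct 24.1, reference (e.g. beta-actin) Ct 18.0.
# Control (tumour-free) lung: target Ct 33.5, reference Ct 18.2.
fold = relative_expression(24.1, 18.0, 33.5, 18.2)
print(f"tumour-specific mRNA: {fold:.0f}-fold over control lung")
```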

  11. Quantitative nucleation and growth kinetics of gold nanoparticles via model-assisted dynamic spectroscopic approach.

    PubMed

    Zhou, Yao; Wang, Huixuan; Lin, Wenshuang; Lin, Liqin; Gao, Yixian; Yang, Feng; Du, Mingming; Fang, Weiping; Huang, Jiale; Sun, Daohua; Li, Qingbiao

    2013-10-01

    Lacking quantitative experimental data and/or kinetic models that could mathematically depict the redox chemistry and the crystallization issue, the bottom-up formation kinetics of gold nanoparticles (GNPs) remains a challenge. We measured the dynamic regime of GNPs synthesized by l-ascorbic acid (representing a chemical approach) and/or foliar aqueous extract (a biogenic approach) via in situ spectroscopic characterization and established a redox-crystallization model which allows quantitative and separate parameterization of the nucleation and growth processes. The main results were simplified as the following aspects: (I) an efficient approach, i.e., dynamic in situ spectroscopic characterization assisted by the redox-crystallization model, was established for quantitative analysis of the overall formation kinetics of GNPs in solution; (II) formation of GNPs by the chemical and the biogenic approaches experienced a slow nucleation stage followed by a growth stage which behaved as a mixed-order reaction, and, unlike the chemical approach, the biogenic method involved heterogeneous nucleation; (III) biosynthesis of flaky GNPs was a kinetically controlled process favored by relatively slow redox chemistry; and (IV) although GNP formation consists of two aspects, namely the redox chemistry and the crystallization issue, the latter was the rate-determining event that controls the dynamic regime of the whole physicochemical process.
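
    A standard quantitative stand-in for "slow nucleation followed by autocatalytic growth" is the two-step Finke-Watzky mechanism, which has a closed-form product curve that can be fitted directly to in-situ absorbance traces. The sketch below fits it to synthetic data; this is the generic two-step analogue, not the authors' redox-crystallization model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Finke-Watzky two-step kinetics: slow nucleation A -> B (k1) and
# autocatalytic growth A + B -> 2B (k2), with closed-form solution.
def fw_product(t, k1, k2, a0):
    """Analytic [B](t) for the two-step mechanism, [B](0) = 0."""
    a = (k1 / k2 + a0) / (1.0 + (k1 / (k2 * a0)) * np.exp((k1 + k2 * a0) * t))
    return a0 - a

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 80)                     # minutes
obs = fw_product(t, 1e-3, 0.5, 1.0) + rng.normal(0, 0.01, t.size)

(k1, k2, a0), _ = curve_fit(fw_product, t, obs, p0=(1e-2, 0.1, 0.9),
                            bounds=(0, [1.0, 5.0, 2.0]))
print(f"k1 = {k1:.2e} 1/min (nucleation), k2 = {k2:.2f} (growth), A0 = {a0:.2f}")
```

    The induction period before the sigmoidal rise is set by k1, which is the quantitative signature of the "slow nucleation stage" the abstract describes.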

  12. Can we better use existing and emerging computing hardware to embed activity coefficient predictions in complex atmospheric aerosol models?

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Ruske, Simon; Hindriksen, Vincent; Noisternig, Michael

    2016-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henrys law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, it has so far been impossible to embed fully coupled representations of process-level knowledge covering all possible compounds, even at the single-aerosol-particle level; models typically rely on heavily parameterised descriptions, and as a result we cannot yet quantify the true sensitivity to uncertainties in molecular properties. Relying on emerging numerical frameworks, and designed for the changing landscape of high-performance computing (HPC), in this study we show that comprehensive microphysical models from the single-particle scale to larger scales can be developed to encompass complete state-of-the-art knowledge of aerosol chemical and process diversity. We focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method, profiling traditional coding strategies and those that exploit emerging hardware.
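
    The coding strategy at issue is replacing per-particle loops with array-wide expressions. The sketch below does this for a one-parameter Margules activity-coefficient model, a far simpler stand-in for UNIFAC (which requires group-contribution sums) that still shows the vectorisation pattern that maps well onto SIMD/GPU hardware.

```python
import numpy as np

# Vectorised activity-coefficient evaluation across an ensemble of
# particles: one array expression evaluates every particle at once,
# instead of looping particle by particle.
def margules_gamma(x1, A):
    """Binary one-parameter Margules model:
    ln gamma_1 = A * x2^2, ln gamma_2 = A * x1^2."""
    x2 = 1.0 - x1
    return np.exp(A * x2**2), np.exp(A * x1**2)

# One million particles, each with its own organic mole fraction
# (synthetic values; A = 1.2 is an arbitrary interaction parameter).
x1 = np.random.default_rng(4).uniform(0.0, 1.0, 1_000_000)
g1, g2 = margules_gamma(x1, A=1.2)
print(g1[:3], g2[:3])
```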

  13. Modelling CEC variations versus structural iron reduction levels in dioctahedral smectites. Existing approaches, new data and model refinements.

    PubMed

    Hadi, Jebril; Tournassat, Christophe; Ignatiadis, Ioannis; Greneche, Jean Marc; Charlet, Laurent

    2013-10-01

    A model was developed to describe how the 2:1 layer excess negative charge induced by the reduction of Fe(III) to Fe(II) by sodium dithionite buffered with citrate-bicarbonate is balanced, and the model was applied to nontronites. It is based on new experimental data and extends the structural interpretation introduced by a former model [36-38]. The 2:1 layer negative charge increase due to Fe(III) to Fe(II) reduction is balanced by an excess adsorption of cations in the clay interlayers and a specific sorption of H(+) from solution. Prevalence of one compensating mechanism over the other is related to the growing lattice distortion induced by structural Fe(III) reduction. At low reduction levels, cation adsorption dominates and some of the incorporated protons react with structural OH groups, leading to a dehydroxylation of the structure. Starting from a moderate reduction level, other structural changes occur, leading to a reorganisation of the octahedral and tetrahedral lattice: migration or release of cations, intense dehydroxylation and bonding of protons to undersaturated oxygen atoms. Experimental data highlight some particular properties of ferruginous smectites regarding chemical reduction. Contrary to previous assumptions, the negative layer charge of nontronites does not only increase towards a plateau value upon reduction. A peak is observed in the reduction domain. After this peak, the negative layer charge decreases upon extended reduction (>30%). The decrease is so dramatic that the layer charge of highly reduced nontronites can fall below that of its fully oxidised counterpart. Furthermore, the presence of a large amount of tetrahedral Fe seems to promote intense clay structural changes and Fe reducibility. Our newly acquired data clearly show that models currently available in the literature cannot be applied to the whole reduction range of clay structural Fe. Moreover, changes in the model normalising procedure clearly demonstrate that the investigated low...

  14. Modeling and mapping of cadmium in soils based on qualitative and quantitative auxiliary variables in a cadmium contaminated area.

    PubMed

    Cao, Shanshan; Lu, Anxiang; Wang, Jihua; Huo, Lili

    2017-02-15

    The aim of this study was to measure the improvement in mapping accuracy of spatial distribution of Cd in soils by using geostatistical methods combined with auxiliary factors, especially qualitative variables. Significant correlations between Cd content and correlation environment variables that are easy to obtain (such as topographic factors, distance to residential area, land use types and soil types) were analyzed systematically and quantitatively. Based on 398 samples collected from a Cd contaminated area (Hunan Province, China), we estimated the spatial distribution of Cd in soils by using spatial interpolation models, including ordinary kriging (OK), and regression kriging (RK) with each auxiliary variable, all quantitative variables (RKWQ) and all auxiliary variables (RKWA). Results showed that mapping with RK was more consistent with the sampling data of the spatial distribution of Cd in the study area than mapping with OK. The performance indicators (smaller mean error, mean absolute error, root mean squared error values and higher relative improvement of RK than OK) indicated that the introduction of auxiliary variables can improve the prediction accuracy of Cd in soils for which the spatial structure could not be well captured by point-based observation (nugget to sill ratio=0.76) and strong relationships existed between variables to be predicted and auxiliary variables. The comparison of RKWA with RKWQ further indicated that the introduction of qualitative variables improved the prediction accuracy, and even weakened the effects of quantitative factors. Furthermore, the significantly different relative improvement with similar R(2) and varying spatial dependence showed that a reasonable choice of auxiliary variables and analysis of spatial structure of regression residuals are equally important to ensure accurate predictions.

  15. Construction of coherent nano quantitative structure-properties relationships (nano-QSPR) models and catastrophe theory.

    PubMed

    Carbó-Dorca, R; Besalú, E

    2011-10-01

    The structure one can associate with coherent nano-quantitative structure-properties relationship (nano-QSPR) models is briefly discussed. Such nano-QSPR model functions are described as possessing three parts: a particle size polynomial, a typical QSPR function, and a special effects function. The expected behaviour of the particle size part is discussed from the point of view of catastrophe theory, in this way providing a plausible general picture of the emergence of new properties of nanoparticles and the holographic location of information content.

  16. Impact of Model Uncertainties on Quantitative Analysis of FUV Auroral Images: Peak Production Height

    NASA Technical Reports Server (NTRS)

    Germany, G. A.; Lummerzheim, D.; Parks, G. K.; Brittnacher, M. J.; Spann, James F., Jr.; Richards, Phil G.

    1999-01-01

    We demonstrate that small uncertainties in the modeled height of peak production for FUV emissions can lead to significant uncertainties in the analysis of these same emissions. In particular, an uncertainty of only 3 km in the peak production height can lead to a 50% uncertainty in the mean auroral energy deduced from the images. This altitude uncertainty is comparable to the differences among auroral deposition models currently used for UVI analysis. Consequently, great care must be taken in quantitative photometric analysis and interpretation of FUV auroral images.

  17. Quantitative structure-activity relationship modeling of the toxicity of organothiophosphate pesticides to Daphnia magna and Cyprinus carpio.

    PubMed

    Zvinavashe, Elton; Du, Tingting; Griff, Tamas; van den Berg, Hans H J; Soffers, Ans E M F; Vervoort, Jacques; Murk, Albertinka J; Rietjens, Ivonne M C M

    2009-06-01

    Within the REACH regulatory framework in the EU, quantitative structure-activity relationship (QSAR) models are expected to help reduce the number of animals used for experimental testing. The objective of this study was to develop QSAR models to describe the acute toxicity of organothiophosphate pesticides to aquatic organisms. Literature data sets for acute toxicity data of organothiophosphates to fish and one data set from experiments with 15 organothiophosphates on Daphnia magna performed in the present study were used to establish QSARs based on quantum mechanically derived molecular descriptors. The logarithm of the octanol/water partition coefficient, log K(ow), the energy of the lowest unoccupied molecular orbital, E(lumo), and the energy of the highest occupied molecular orbital, E(homo), were used as descriptors. Additionally, it was investigated whether toxicity data for the invertebrate D. magna could be used to build a QSAR model to predict toxicity to fish. Suitable QSAR models (r(2) ≥ 0.80) could be derived, including a D. magna-based model to predict the acute toxicity of organothiophosphates to fish. The three QSAR models were validated either both internally and externally (D. magna) or internally only (carp and D. magna to carp). For each QSAR model, an applicability domain was defined based on the chemical structures and the ranges of the descriptor values of the training set compounds. From the 100,196 compounds of the European Inventory of Existing Commercial Chemical Substances (EINECS), 83 compounds were identified that fit the selection criteria for the QSAR models. For these compounds, using our QSAR models, one can obtain an indication of their toxicity without the need for additional experimental...
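
    The descriptor set (log Kow, E(homo), E(lumo)) lends itself to an ordinary least-squares QSAR. Below is a sketch on synthetic descriptor data, including the simplest possible applicability-domain check (descriptor ranges); the coefficients and values are invented, not the study's fitted model.

```python
import numpy as np

# Descriptor-based QSAR sketch: regress acute toxicity (e.g. log 1/EC50)
# on log Kow, E_homo and E_lumo. All values below are synthetic.
rng = np.random.default_rng(5)
n = 15
log_kow = rng.uniform(1.0, 5.0, n)
e_homo = rng.uniform(-9.5, -8.5, n)            # eV
e_lumo = rng.uniform(-1.5, -0.5, n)            # eV
tox = 0.8 * log_kow - 0.6 * e_lumo + 0.1 * e_homo + rng.normal(0, 0.2, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), log_kow, e_homo, e_lumo])
coef, *_ = np.linalg.lstsq(X, tox, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((tox - pred) ** 2) / np.sum((tox - tox.mean()) ** 2)
print(f"coefficients: {np.round(coef, 3)}, r^2 = {r2:.2f}")

# Applicability domain in its simplest sense: only trust predictions
# for compounds whose descriptors fall inside the training ranges.
inside = log_kow.min() <= 3.2 <= log_kow.max()
print("log Kow = 3.2 inside training range:", inside)
```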

  18. CSML2SBML: a novel tool for converting quantitative biological pathway models from CSML into SBML.

    PubMed

    Li, Chen; Nagasaki, Masao; Ikeda, Emi; Sekiya, Yayoi; Miyano, Satoru

    2014-07-01

    CSML and SBML are XML-based model definition standards which were developed with the aim of creating exchange formats for modeling, visualizing and simulating biological pathways. In this article we report a release of a format convertor for quantitative pathway models, namely CSML2SBML. It translates models encoded in CSML into SBML without loss of structural and kinetic information. The simulation and parameter estimation of the resulting SBML model can be carried out with the compliant tool CellDesigner for further analysis. The convertor is based on the standards CSML version 3.0 and SBML Level 2 Version 4. In our experiments, 11 out of 15 pathway models in the CSML model repository and 228 models in the Macrophage Pathway Knowledgebase (MACPAK) were successfully converted to SBML models. The consistency of the resulting models was validated by the libSBML Consistency Check of CellDesigner. Furthermore, a converted SBML model, assigned the kinetic parameters translated from its CSML model, reproduces the same dynamics with CellDesigner as the CSML model running on Cell Illustrator. CSML2SBML, along with its instructions and examples for use, is available at http://csml2sbml.csml.org.

  19. D-Factor: A Quantitative Model of Application Slow-Down in Multi-Resource Shared Systems

    SciTech Connect

    Lim, Seung-Hwan; Huh, Jae-Seok; Kim, Youngjae; Shipman, Galen M; Das, Chita

    2012-01-01

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price - resource contention among jobs increases job completion time. In this paper, we analyze slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by a vector-valued loading statistic, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.
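
    One plausible reading of the quadratic formulation is sketched below: each job carries a loading vector over shared resources, and its dilation is a quadratic form over its co-runners' loadings. The interference matrix and all loadings are invented; the paper derives and validates its own coefficients from measurements.

```python
import numpy as np

# Resources: cpu, disk, net. W encodes how contention on each resource
# pair translates into slow-down (hypothetical values).
W = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.4]])

loads = np.array([[0.9, 0.1, 0.0],   # job 0: cpu-bound
                  [0.2, 0.8, 0.1],   # job 1: disk-bound
                  [0.3, 0.6, 0.2]])  # job 2: mixed

def dilation_factors(loads, W):
    """dilation_i = 1 + sum_{j != i} l_i^T W l_j (no contention -> 1)."""
    cross = loads @ W @ loads.T       # pairwise quadratic interactions
    return 1.0 + cross.sum(axis=1) - np.diag(cross)

for i, d in enumerate(dilation_factors(loads, W)):
    print(f"job {i}: dilation factor {d:.2f}")
```

    The non-linearity the abstract mentions appears naturally here: co-scheduling two disk-bound jobs inflates both dilation factors far more than pairing a cpu-bound job with a disk-bound one.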

  20. Murine model of disseminated fusariosis: evaluation of the fungal burden by traditional CFU and quantitative PCR.

    PubMed

    González, Gloria M; Márquez, Jazmín; Treviño-Rangel, Rogelio de J; Palma-Nicolás, José P; Garza-González, Elvira; Ceceñas, Luis A; Gerardo González, J

    2013-10-01

    Systemic disease is the most severe clinical form of fusariosis, and treatment is challenging due to the refractory response to antifungals. Treatment for murine Fusarium solani infection has been described in models that employ CFU quantitation in organs as a parameter of therapeutic efficacy. However, CFU counts do not precisely reflect the number of cells of filamentous fungi such as F. solani. In this study, we developed a murine model of disseminated fusariosis and compared the fungal burden with two methods: CFU and quantitative PCR. ICR and BALB/c mice received an intravenous injection of 1 × 10(7) conidia of F. solani per mouse. On days 2, 5, 7, and 9, mice from each strain were killed. The spleen and kidneys of each animal were removed and evaluated by qPCR and CFU determinations. Results from the CFU assay indicated that the spleen and kidneys had almost the same fungal burden in both BALB/c and ICR mice during the days of the evaluation. In the qPCR assay, the spleen and kidney of each mouse strain had increased fungal burden in each determination throughout the entire experiment. The fungal load determined by the qPCR assay was significantly greater than that determined from CFU measurements of tissue. qPCR could be considered as a tool for quantitative evaluation of fungal burden in experimental disseminated F. solani infection.

  1. Model for quantitative tip-enhanced spectroscopy and the extraction of nanoscale-resolved optical constants

    NASA Astrophysics Data System (ADS)

    McLeod, Alexander S.; Kelly, P.; Goldflam, M. D.; Gainsforth, Z.; Westphal, A. J.; Dominguez, Gerardo; Thiemens, Mark H.; Fogler, Michael M.; Basov, D. N.

    2014-08-01

    Near-field infrared spectroscopy by elastic scattering of light from a probe tip resolves optical contrasts in materials at dramatically subwavelength scales across a broad energy range, with the demonstrated capacity for chemical identification at the nanoscale. However, current models of probe-sample near-field interactions still cannot provide a sufficiently quantitative interpretation of measured near-field contrasts, especially in the case of materials supporting strong surface phonons. We present a model of near-field spectroscopy derived from basic principles and verified by finite-element simulations, demonstrating superb predictive agreement both with tunable quantum cascade laser near-field spectroscopy of SiO2 thin films and with newly presented nanoscale Fourier transform infrared (nanoFTIR) spectroscopy of crystalline SiC. We discuss the role of probe geometry, field retardation, and surface mode dispersion in shaping the measured near-field response. This treatment enables a route to quantitatively determine nanoresolved optical constants, as we demonstrate by inverting newly presented nanoFTIR spectra of an SiO2 thin film into the frequency dependent dielectric function of its mid-infrared optical phonon. Our formalism further enables tip-enhanced spectroscopy as a potent diagnostic tool for quantitative nanoscale spectroscopy.
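
    The kind of frequency-dependent dielectric function such an inversion targets is often parameterized as a Lorentz (TO-phonon) oscillator; the sketch below uses round illustrative numbers, not fitted SiO2 values.

    ```python
    # Sketch of a single-oscillator (Lorentz) dielectric function of the kind
    # nanoFTIR spectra can be inverted into. Parameters are illustrative only.
    import numpy as np

    def lorentz_epsilon(omega, eps_inf=2.0, omega_TO=1075.0, strength=1.5, gamma=30.0):
        """Dielectric function eps(omega); omega and parameters in cm^-1."""
        return eps_inf + strength * omega_TO**2 / (omega_TO**2 - omega**2 - 1j * gamma * omega)

    omega = np.linspace(900.0, 1300.0, 5)       # mid-infrared range, cm^-1
    for w, eps in zip(omega, lorentz_epsilon(omega)):
        print(f"{w:7.1f} cm^-1  eps = {complex(eps):.2f}")
    ```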

  2. Extraction, separation and quantitative structure-retention relationship modeling of essential oils in three herbs.

    PubMed

    Wei, Yuhui; Xi, Lili; Chen, Dongxia; Wu, Xin'an; Liu, Huanxiang; Yao, Xiaojun

    2010-07-01

    The essential oils extracted from three kinds of herbs were separated on a 5% phenylmethyl silicone (DB-5MS) bonded-phase fused-silica capillary column and identified by MS. Seventy-four of the identified compounds were selected as the original data set, and their chemical structures and gas chromatographic retention times (RT) were used to build a quantitative structure-retention relationship model by genetic algorithm and multiple linear regression analysis. The predictive ability of the model was verified by internal validation (leave-one-out and fivefold cross-validation, and Y-scrambling). As for external validation, the model was also applied to predict the gas chromatographic RT of 14 volatile compounds, not used for model development, from the essential oil of Radix angelicae sinensis. The applicability domain was checked by the leverage approach to verify prediction reliability. The results obtained from several validations indicated that the best quantitative structure-retention relationship model was robust and satisfactory, could provide a feasible and effective tool for predicting the gas chromatographic RT of volatile compounds, and could also be applied to help identify compounds with the same gas chromatographic RT.
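
    The internal-validation step can be sketched in a few lines; the following uses scikit-learn's leave-one-out machinery on synthetic descriptors rather than the paper's data.

    ```python
    # Minimal sketch of leave-one-out (LOO) cross-validation of a multiple
    # linear regression QSRR model. Descriptors and RTs are synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 4))                 # 30 compounds, 4 descriptors
    rt = X @ [2.0, -1.0, 0.5, 3.0] + rng.normal(scale=0.3, size=30)  # retention times

    pred = cross_val_predict(LinearRegression(), X, rt, cv=LeaveOneOut())
    q2 = 1 - np.sum((rt - pred) ** 2) / np.sum((rt - rt.mean()) ** 2)
    print("LOO Q^2 =", round(q2, 3))             # predictive-ability estimate
    ```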

  3. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  4. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    PubMed

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence Model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to Environmental Research Centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article.

  5. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  6. Mechanical Model Analysis for Quantitative Evaluation of Liver Fibrosis Based on Ultrasound Tissue Elasticity Imaging

    NASA Astrophysics Data System (ADS)

    Shiina, Tsuyoshi; Maki, Tomonori; Yamakawa, Makoto; Mitake, Tsuyoshi; Kudo, Masatoshi; Fujimoto, Kenji

    2012-07-01

    Precise evaluation of the stage of chronic hepatitis C with respect to fibrosis has become an important issue for preventing the occurrence of cirrhosis and initiating appropriate therapeutic intervention such as viral eradication using interferon. Ultrasound tissue elasticity imaging, i.e., elastography, can visualize tissue hardness/softness, and its clinical usefulness has been studied for detecting and evaluating tumors. We have recently reported that the texture of the elasticity image changes as fibrosis progresses. To evaluate fibrosis progression quantitatively on the basis of ultrasound tissue elasticity imaging, we introduced a mechanical model of fibrosis progression, simulated the process by which hepatic fibrosis affects elasticity images, and compared the results with those of clinical data analysis. As a result, it was confirmed that even in diffuse diseases like chronic hepatitis, the patterns of elasticity images are related to fibrous structural changes caused by hepatic disease and can be used to derive features for quantitative evaluation of the fibrosis stage.

  7. Evolution of a fold-thrust belt deforming a unit with pre-existing linear asperities: Insights from analog models

    NASA Astrophysics Data System (ADS)

    Burberry, Caroline M.; Swiatlowski, Jerlyn L.

    2016-06-01

    Heterogeneity, whether geometric or rheologic, in crustal material undergoing compression affects the geometry of the structures produced. This study documents the thrust fault geometries produced when discrete linear asperities are introduced into an analog model, scaled to represent bulk upper crustal properties, and compressed. Varying obliquities of the asperities are used, relative to the imposed compression, and the resultant development of thrust fault traces and branch lines in map view is tracked. Once the model runs are completed, cross-sections are created and analyzed. The models show that asperities confined to the base layer promote the clustering of branch lines in the surface thrusts. Strong clustering in branch lines is also noted where several asperities are in close proximity or cross. Slight reverse-sense reactivation of asperities that cut through the sedimentary sequence is noted in cross-section, where the asperity and the subsequent thrust belt interact. The model results are comparable to the situation in the Dinaric Alps, where pre-existing faults to the SW of the NE Adriatic Fault Zone contribute to the clustering of branch lines developed in the surface fold-thrust belt. These results can therefore be used to evaluate the evolution of other basement-involved fold-thrust belts worldwide.

  8. Quantitative characterization of spurious numerical oscillations in 48 CMIP5 models

    NASA Astrophysics Data System (ADS)

    Geil, Kerrie L.; Zeng, Xubin

    2015-06-01

    Spurious numerical oscillations (SNOs) (e.g., Gibbs oscillations) can appear as unrealistic spatial waves near discontinuities or sharp gradients in global model fields (e.g., orography) and have been a known problem in global models for decades. Multiple methods of oscillation reduction exist; consequently, the oscillations are presumed small in modern climate models and hence are rarely addressed in recent literature. Here we use two metrics to quantify SNOs in 13 variables from 48 Coupled Model Intercomparison Project Phase 5 models along a Pacific Ocean transect near the Andes. Results show that 48% of nonspectral models and 95% of spectral models have at least one variable with SNO amplitude as large as, or greater than, atmospheric interannual variability. The impact of SNOs on climate simulations should be thoroughly evaluated and further efforts to substantially reduce SNOs in climate models are urgently needed.
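
    A metric of this kind can be sketched by comparing the amplitude of the short-wavelength residual along a transect against interannual variability at the same points; the data below are synthetic.

    ```python
    # Sketch of an SNO metric: amplitude of high-frequency spatial wiggles
    # along a transect relative to interannual variability. Synthetic data.
    import numpy as np

    lat = np.linspace(-40.0, 0.0, 81)                  # transect latitude points
    field = 288 + 0.05 * lat + 0.8 * np.sin(2 * np.pi * lat / 2.5)  # trend + ripple
    interannual_std = 0.5                              # synthetic variability (K)

    smooth = np.convolve(field, np.ones(5) / 5, mode="same")  # 5-point running mean
    sno_amplitude = np.max(np.abs(field - smooth)[2:-2])      # drop edge artifacts
    print("SNO amplitude / interannual std:",
          round(float(sno_amplitude / interannual_std), 2))   # >= 1 flags a problem
    ```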

  9. Linking Antisocial Behavior, Substance Use, and Personality: An Integrative Quantitative Model of the Adult Externalizing Spectrum

    PubMed Central

    Krueger, Robert F.; Markon, Kristian E.; Patrick, Christopher J.; Benning, Stephen D.; Kramer, Mark D.

    2008-01-01

    Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena. PMID:18020714

  10. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

    PubMed Central

    Rieger, TR; Musante, CJ

    2016-01-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
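
    The selection idea (accept or reject sampled virtual patients so that the selected set, unweighted, matches the observed distribution) can be sketched as follows; all numbers are illustrative.

    ```python
    # Schematic of selection rather than weighting: sample candidate virtual
    # patients, then accept each with probability proportional to the target
    # density of its model output. All values are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    params = rng.uniform(0.5, 2.0, size=20000)          # plausible parameter range
    outputs = 3.0 * params + rng.normal(scale=0.2, size=params.size)  # model output

    target_mean, target_sd = 4.0, 0.6                   # observed clinical statistics
    density = np.exp(-0.5 * ((outputs - target_mean) / target_sd) ** 2)
    accept = rng.uniform(size=outputs.size) < density / density.max()

    vpop = outputs[accept]                              # selected virtual population
    print(f"selected {vpop.size} patients: mean={vpop.mean():.2f}, sd={vpop.std():.2f}")
    ```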

  11. Linking antisocial behavior, substance use, and personality: an integrative quantitative model of the adult externalizing spectrum.

    PubMed

    Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Benning, Stephen D; Kramer, Mark D

    2007-11-01

    Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena.

  12. The use of mode of action information in risk assessment: quantitative key events/dose-response framework for modeling the dose-response for key events.

    PubMed

    Simon, Ted W; Simons, S Stoney; Preston, R Julian; Boobis, Alan R; Cohen, Samuel M; Doerrer, Nancy G; Fenner-Crisp, Penelope A; McMullin, Tami S; McQueen, Charlene A; Rowlands, J Craig

    2014-08-01

    The HESI RISK21 project formed the Dose-Response/Mode-of-Action Subteam to develop strategies for using all available data (in vitro, in vivo, and in silico) to advance the next generation of chemical risk assessments. A goal of the Subteam is to enhance the existing Mode of Action/Human Relevance Framework and Key Events/Dose Response Framework (KEDRF) to make the best use of quantitative dose-response and timing information for Key Events (KEs). The resulting Quantitative Key Events/Dose-Response Framework (Q-KEDRF) provides a structured quantitative approach for systematic examination of the dose-response and timing of KEs resulting from a dose of a bioactive agent that causes a potential adverse outcome. Two concepts are described as aids to increasing the understanding of mode of action: Associative Events and Modulating Factors. These concepts are illustrated in two case studies: 1) cholinesterase inhibition by the pesticide chlorpyrifos, which illustrates the necessity of considering quantitative dose-response information when assessing the effect of a Modulating Factor, that is, enzyme polymorphisms in humans, and 2) estrogen-induced uterotrophic responses in rodents, which demonstrate how quantitative dose-response modeling for KEs, the understanding of temporal relationships between KEs and a counterfactual examination of hypothesized KEs can determine whether they are Associative Events or true KEs.
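
    Quantitative dose-response modeling for a single KE is commonly done with a Hill-type curve; the sketch below fits one to synthetic data and is not the framework's prescribed implementation.

    ```python
    # Sketch of dose-response modeling for one key event, e.g., fraction of
    # cholinesterase inhibited vs dose. Data points are synthetic placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(dose, emax, ed50, n):
        return emax * dose**n / (ed50**n + dose**n)

    dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
    resp = np.array([0.02, 0.08, 0.25, 0.55, 0.82, 0.95])   # synthetic KE data

    (emax, ed50, n), _ = curve_fit(hill, dose, resp, p0=[1.0, 3.0, 1.0])
    print(f"Emax={emax:.2f}, ED50={ed50:.2f}, Hill n={n:.2f}")
    ```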

  13. The quantitative genetics of indirect genetic effects: a selective review of modelling issues.

    PubMed

    Bijma, P

    2014-01-01

    Indirect genetic effects (IGE) occur when the genotype of an individual affects the phenotypic trait value of another conspecific individual. IGEs can have profound effects on both the magnitude and the direction of response to selection. Models of inheritance and response to selection in traits subject to IGEs have been developed within two frameworks; a trait-based framework in which IGEs are specified as a direct consequence of individual trait values, and a variance-component framework in which phenotypic variance is decomposed into a direct and an indirect additive genetic component. This work is a selective review of the quantitative genetics of traits affected by IGEs, with a focus on modelling, estimation and interpretation issues. It includes a discussion on variance-component vs trait-based models of IGEs, a review of issues related to the estimation of IGEs from field data, including the estimation of the interaction coefficient Ψ (psi), and a discussion on the relevance of IGEs for response to selection in cases where the strength of interaction varies among pairs of individuals. An investigation of the trait-based model shows that the interaction coefficient Ψ may deviate considerably from the corresponding regression coefficient when feedback occurs. The increasing research effort devoted to IGEs suggests that they are a widespread phenomenon, probably particularly in natural populations and plants. Further work in this field should considerably broaden our understanding of the quantitative genetics of inheritance and response to selection in relation to the social organisation of populations.
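
    Why Ψ can deviate from the regression coefficient under feedback is easiest to see in the trait-based model for a pair of interacting individuals; the following is a minimal sketch in the usual notation (direct effects a_D, interaction coefficient ψ), not the paper's full derivation.

    ```latex
    % Trait-based model with feedback for two interacting individuals i and j:
    z_i = a_{D,i} + \psi\, z_j , \qquad z_j = a_{D,j} + \psi\, z_i
    % Solving the coupled pair gives
    z_i = \frac{a_{D,i} + \psi\, a_{D,j}}{1 - \psi^{2}} ,
    % so each phenotype mixes direct and indirect effects through the feedback
    % factor 1/(1 - \psi^2); a regression of z_i on z_j therefore recovers \psi
    % only when feedback is negligible (\psi^2 \ll 1).
    ```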

  14. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model

    PubMed Central

    Yoshioka, S.; Matsuhana, B.; Tanaka, S.; Inouye, Y.; Oshima, N.; Kinoshita, S.

    2011-01-01

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model. PMID:20554565

  15. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model.

    PubMed

    Yoshioka, S; Matsuhana, B; Tanaka, S; Inouye, Y; Oshima, N; Kinoshita, S

    2011-01-06

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model.
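
    The optics behind the model can be sketched with the ideal multilayer condition: tilting the platelets shrinks the effective spacing and blue-shifts the reflection peak. The refractive indices and thicknesses below are typical illustrative values, not the measured ones.

    ```python
    # Sketch of the Venetian blind idea: the ideal multilayer peak is
    # lambda = 2 * (n_p * d_p + n_c * d_c), and tilting the platelets by an
    # angle reduces the effective cytoplasm spacing d_c. Values illustrative.
    import numpy as np

    n_platelet, d_platelet = 1.83, 10e-9    # reflecting platelet (m)
    n_cyto = 1.33                            # cytoplasm gap
    d_cyto0 = 170e-9                         # gap at zero tilt (m)

    for tilt_deg in (0, 10, 20, 30):
        d_cyto = d_cyto0 * np.cos(np.radians(tilt_deg))   # spacing shrinks with tilt
        lam = 2 * (n_platelet * d_platelet + n_cyto * d_cyto)
        print(f"tilt {tilt_deg:2d} deg -> peak ~ {lam * 1e9:.0f} nm")
    ```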

  16. Curcumin labels amyloid pathology in vivo, disrupts existing plaques, and partially restores distorted neurites in an Alzheimer mouse model.

    PubMed

    Garcia-Alloza, M; Borrelli, L A; Rozkalne, A; Hyman, B T; Bacskai, B J

    2007-08-01

    Alzheimer's disease (AD) is characterized by senile plaques and neurodegeneration although the neurotoxic mechanisms have not been completely elucidated. It is clear that both oxidative stress and inflammation play an important role in the illness. The compound curcumin, with a broad spectrum of anti-oxidant, anti-inflammatory, and anti-fibrilogenic activities may represent a promising approach for preventing or treating AD. Curcumin is a small fluorescent compound that binds to amyloid deposits. In the present work we used in vivo multiphoton microscopy (MPM) to demonstrate that curcumin crosses the blood-brain barrier and labels senile plaques and cerebrovascular amyloid angiopathy (CAA) in APPswe/PS1dE9 mice. Moreover, systemic treatment of mice with curcumin for 7 days clears and reduces existing plaques, as monitored with longitudinal imaging, suggesting a potent disaggregation effect. Curcumin also led to a limited, but significant reversal of structural changes in dystrophic dendrites, including abnormal curvature and dystrophy size. Together, these data suggest that curcumin reverses existing amyloid pathology and associated neurotoxicity in a mouse model of AD. This approach could lead to more effective clinical therapies for the prevention of oxidative stress, inflammation and neurotoxicity associated with AD.

  17. Quantitative analysis of anaerobic oxidation of methane (AOM) in marine sediments: A modeling perspective

    NASA Astrophysics Data System (ADS)

    Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.

    2011-05-01

    Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
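
    The bimolecular rate law discussed above is simple enough to state in one line; the rate constant below is a placeholder, since reported values span about six orders of magnitude.

    ```python
    # The widely used bimolecular rate law for AOM: R_AOM = k * [CH4] * [SO4].
    # The rate constant k is a placeholder; published values vary ~6 orders
    # of magnitude, which is the transferability problem the review discusses.
    def aom_rate(ch4_mM, so4_mM, k=1e-3):
        """Volumetric AOM rate (mM/yr) for given porewater concentrations (mM)."""
        return k * ch4_mM * so4_mM

    print(aom_rate(ch4_mM=2.0, so4_mM=28.0))   # seawater sulfate is ~28 mM
    ```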

  18. Evaluation of the Use of Existing RELAP5-3D Models to Represent the Actinide Burner Test Reactor

    SciTech Connect

    C. B. Davis

    2007-02-01

    The RELAP5-3D code is being considered as a thermal-hydraulic system code to support the development of the sodium-cooled Actinide Burner Test Reactor as part of Global Nuclear Energy Partnership. An evaluation was performed to determine whether the control system could be used to simulate the effects of non-convective mechanisms of heat transport in the fluid that are not currently represented with internal code models, including axial and radial heat conduction in the fluid and subchannel mixing. The evaluation also determined the relative importance of axial and radial heat conduction and fluid mixing on peak cladding temperature for a wide range of steady conditions and during a representative loss-of-flow transient. The evaluation was performed using a RELAP5-3D model of a subassembly in the Experimental Breeder Reactor-II, which was used as a surrogate for the Actinide Burner Test Reactor. An evaluation was also performed to determine if the existing centrifugal pump model could be used to simulate the performance of electromagnetic pumps.

  19. Synthetic cannabinoids: In silico prediction of the cannabinoid receptor 1 affinity by a quantitative structure-activity relationship model.

    PubMed

    Paulke, Alexander; Proschak, Ewgenij; Sommer, Kai; Achenbach, Janosch; Wunder, Cora; Toennes, Stefan W

    2016-03-14

    The number of new synthetic psychoactive compounds increases steadily. Among these psychoactive compounds, the synthetic cannabinoids (SCBs) are the most popular and serve as a substitute for herbal cannabis. More than 600 of these substances already exist. For some SCBs the in vitro cannabinoid receptor 1 (CB1) affinity is known, but for the majority it is unknown. A quantitative structure-activity relationship (QSAR) model was developed, which allows the determination of the SCBs' affinity for CB1 (expressed as the binding constant, Ki) without reference substances. The chemically advanced template search descriptor was used for vector representation of the compound structures. The similarity between two molecules was calculated using the Feature-Pair Distribution Similarity. The Ki values were calculated using the Inverse Distance Weighting method. The prediction model was validated using a cross-validation procedure. The predicted Ki values of some new SCBs were in a range between 20 (considerably higher affinity for CB1 than THC) and 468 (considerably lower affinity for CB1 than THC). The present QSAR model can serve as a simple, fast and cheap tool to get a first hint of the biological activity of new synthetic cannabinoids or of other new psychoactive compounds.
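
    The Inverse Distance Weighting step named in the abstract can be sketched as follows; the distances (which would come from the Feature-Pair Distribution Similarity) and Ki values are placeholders.

    ```python
    # Sketch of Inverse Distance Weighting: predict the Ki of a query SCB from
    # the Ki values of its nearest neighbours, each weighted by 1/distance^p.
    def idw_predict(distances, ki_values, p=2.0, eps=1e-12):
        weights = [1.0 / (d + eps) ** p for d in distances]
        return sum(w * k for w, k in zip(weights, ki_values)) / sum(weights)

    neighbour_dist = [0.12, 0.25, 0.40]      # structural distances (placeholders)
    neighbour_ki = [35.0, 120.0, 300.0]      # their known Ki values (nM)
    print(f"predicted Ki ~ {idw_predict(neighbour_dist, neighbour_ki):.0f} nM")
    ```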

  20. A threshold of mechanical strain intensity for the direct activation of osteoblast function exists in a murine maxilla loading model.

    PubMed

    Suzuki, Natsuki; Aoki, Kazuhiro; Marcián, Petr; Borák, Libor; Wakabayashi, Noriyuki

    2016-10-01

    The response to the mechanical loading of bone tissue has been extensively investigated; however, precisely how much strain intensity is necessary to promote bone formation remains unclear. Combination studies utilizing histomorphometric and numerical analyses were performed using the established murine maxilla loading model to clarify the threshold of mechanical strain needed to accelerate bone formation activity. For 7 days, 191 kPa loading stimulation for 30 min/day was applied to C57BL/6J mice. Two regions of interest, the AWAY region (away from the loading site) and the NEAR region (near the loading site), were determined. The inflammatory score increased in the NEAR region, but not in the AWAY region. A strain intensity map obtained from [Formula: see text] images was superimposed onto the images of the bone formation inhibitor, sclerostin-positive cell localization. The number of sclerostin-positive cells significantly decreased after mechanical loading of more than [Formula: see text] in the AWAY region, but not in the NEAR region. The mineral apposition rate, which shows the bone formation ability of osteoblasts, was accelerated at the site of surface strain intensity, namely around [Formula: see text], but not at the site of lower surface strain intensity, which was around [Formula: see text] in the AWAY region, thus suggesting the existence of a strain intensity threshold for promoting bone formation. Taken together, our data suggest that a threshold of mechanical strain intensity for the direct activation of osteoblast function and the reduction of sclerostin exists in a murine maxilla loading model in the non-inflammatory region.

  1. Development of validated quantitative structure-retention relationship models for retention indices of plant essential oils.

    PubMed

    Qin, Li-Tang; Liu, Shu-Shen; Chen, Fu; Wu, Qing-Sheng

    2013-05-01

    Quantitative structure-retention relationship (QSRR) models were developed for the retention indices of 505 frequently reported components of plant essential oils. Multiple linear regression was used to build QSRR models for the dimethyl silicone, dimethyl silicone with 5% phenyl groups, and polyethylene glycol stationary phases. We improved the variable selection and modeling method, based on predictive performance, for selecting the optimum descriptors from the molecular weight, 75 topological indices, and 170 atom-type E-state indices. The three-variable QSRR models show high correlation coefficients of 0.937 for the dimethyl silicone and 0.933 for the dimethyl silicone with 5% phenyl groups stationary phases. Four variables were selected to develop the QSRR model for the polyethylene glycol stationary phase. Leave-one-out and leave-many-out cross-validation, bootstrapping, and the y-randomization test showed that the three models are robust and free of chance correlation. External validation with the test set showed that the three models have high external predictive power. The three models thus presented high-quality fits and both internal and external predictive power. It is expected that the models can effectively predict retention indices of essential oil components without experimental values.

  2. Mathematical Modelling of a Brain Tumour Initiation and Early Development: A Coupled Model of Glioblastoma Growth, Pre-Existing Vessel Co-Option, Angiogenesis and Blood Perfusion

    PubMed Central

    Cai, Yan; Wu, Jie; Li, Zhiyong; Long, Quan

    2016-01-01

    We propose a coupled mathematical modelling system to investigate glioblastoma growth in response to dynamic changes in chemical and haemodynamic microenvironments caused by pre-existing vessel co-option, remodelling, collapse and angiogenesis. A typical tree-like architecture network with different orders for vessel diameter is designed to model pre-existing vasculature in host tissue. The chemical substances including oxygen, vascular endothelial growth factor, extra-cellular matrix and matrix degradation enzymes are calculated based on the haemodynamic environment which is obtained by coupled modelling of intravascular blood flow with interstitial fluid flow. The haemodynamic changes, including vessel diameter and permeability, are introduced to reflect a series of pathological characteristics of abnormal tumour vessels including vessel dilation, leakage, angiogenesis, regression and collapse. Migrating cells are included as a new phenotype to describe the migration behaviour of malignant tumour cells. The simulation focuses on the avascular phase of tumour development and stops at an early phase of angiogenesis. The model is able to demonstrate the main features of glioblastoma growth in this phase such as the formation of pseudopalisades, cell migration along the host vessels, the pre-existing vasculature co-option, angiogenesis and remodelling. The model also enables us to examine the influence of initial conditions and local environment on the early phase of glioblastoma growth. PMID:26934465

  3. Mathematical Modelling of a Brain Tumour Initiation and Early Development: A Coupled Model of Glioblastoma Growth, Pre-Existing Vessel Co-Option, Angiogenesis and Blood Perfusion.

    PubMed

    Cai, Yan; Wu, Jie; Li, Zhiyong; Long, Quan

    2016-01-01

    We propose a coupled mathematical modelling system to investigate glioblastoma growth in response to dynamic changes in chemical and haemodynamic microenvironments caused by pre-existing vessel co-option, remodelling, collapse and angiogenesis. A typical tree-like architecture network with different orders for vessel diameter is designed to model pre-existing vasculature in host tissue. The chemical substances including oxygen, vascular endothelial growth factor, extra-cellular matrix and matrix degradation enzymes are calculated based on the haemodynamic environment which is obtained by coupled modelling of intravascular blood flow with interstitial fluid flow. The haemodynamic changes, including vessel diameter and permeability, are introduced to reflect a series of pathological characteristics of abnormal tumour vessels including vessel dilation, leakage, angiogenesis, regression and collapse. Migrating cells are included as a new phenotype to describe the migration behaviour of malignant tumour cells. The simulation focuses on the avascular phase of tumour development and stops at an early phase of angiogenesis. The model is able to demonstrate the main features of glioblastoma growth in this phase such as the formation of pseudopalisades, cell migration along the host vessels, the pre-existing vasculature co-option, angiogenesis and remodelling. The model also enables us to examine the influence of initial conditions and local environment on the early phase of glioblastoma growth.

  4. Quantitative comparison of two 3-D resistivity models of the Montelago geothermal prospect

    NASA Astrophysics Data System (ADS)

    van Leeuwen, W. A.; Suryantini; Hersir, G. P.

    2016-09-01

    A combined TEM-MT survey was carried out in the Montelago geothermal prospect, situated on Mindoro Island, the Philippines, with the aim of obtaining the dimensions and depth of the geothermal reservoir as well as formulating the prospect's conceptual model. The acquired MT data are static shift corrected using the TEM measurements. Two different 3D inversion codes are used to create subsurface resistivity models of the corrected MT data set. The similarities and differences between the two resistivity models are quantitatively assessed using a set of structural metrics. Both resistivity models can be generalized by a three-layered model. The model consists of a thin heterogeneous, conductive layer overlying a thick resistive layer, while the basement has a decreased resistivity. Although this is a common characteristic resistivity response for the alteration mineralogy of a volcanic geothermal system, the temperatures at depth are lower than would be expected when interpreting the modelled resistivity accordingly. Since the last volcanic activity in the area was about one million years ago, it is anticipated that the resolved resistivity structure is a remnant of a hydrothermal system associated with a volcanic heat source. This model interpretation is validated by the alteration minerals present in the exploration wells, where high temperature minerals such as epidote are present at depths with a lower temperature than epidote's initial formation temperature. This generalized description of the resistivity model is confirmed by both resistivity models. In this paper the two inversion models are compared not only by assessing the models themselves, but also by reviewing a set of gradient-based structural metrics. An attempt is made to improve the interpretation of the conceptual model by analyzing these structural metrics. Based on these analyses it is concluded that both inversions resolve similar resistivity structures and that the location of the two slim

  5. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    NASA Astrophysics Data System (ADS)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  6. Quantitative evaluation of lake eutrophication responses under alternative water diversion scenarios: a water quality modeling based statistical analysis approach.

    PubMed

    Liu, Yong; Wang, Yilin; Sheng, Hu; Dong, Feifei; Zou, Rui; Zhao, Lei; Guo, Huaicheng; Zhu, Xiang; He, Bin

    2014-01-15

    China is confronting the challenge of accelerated lake eutrophication, and Lake Dianchi is considered the most serious case. Eutrophication control for Lake Dianchi began in the mid-1980s. However, decision makers have been puzzled by the lack of visible water quality response to past efforts given the tremendous investment. Therefore, decision makers desperately need a scientifically sound way to quantitatively evaluate the response of lake water quality to proposed management measures and engineering works. We used a water quality modeling based scenario analysis approach to quantitatively evaluate the eutrophication responses of Lake Dianchi to an under-construction water diversion project. The primary analytic framework was built on a three-dimensional hydrodynamic, nutrient fate and transport, and algae dynamics model, which has previously been calibrated and validated using historical data. We designed 16 scenarios to analyze the water quality effects of three driving forces, including watershed nutrient loading, variations in diverted inflow water, and lake water level. A two-step statistical analysis consisting of an orthogonal test analysis and linear regression was then conducted to distinguish the contributions of the various driving forces to lake water quality. The analysis results show that (a) the different ways of managing the diversion project would result in different water quality responses in Lake Dianchi, though the differences do not appear to be significant; (b) the maximum reductions in annual average and peak Chl-a concentration from the various ways of operating the diversion project are respectively 11% and 5%; (c) a combined 66% watershed load reduction and water diversion can reduce the lake hypoxia volume percentage from the existing 6.82% to 3.00%; and (d) the water diversion will decrease the occurrence of algal blooms, and the effect of algae reduction can be enhanced if diverted water are seasonally allocated such that wet

  7. Satellite contributions to the quantitative characterization of biomass burning for climate modeling

    NASA Astrophysics Data System (ADS)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-07-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the representation of biomass burning quantitatively in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large list of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  8. A quantitative model with new scaling for silicon carbide particle engulfment during silicon crystal growth

    NASA Astrophysics Data System (ADS)

    Derby, Jeffrey J.; Tao, Yutao; Reimann, Christian; Friedrich, Jochen; Jauß, Thomas; Sorgenfrei, Tina; Cröll, Arne

    2017-04-01

    We present rigorous numerical modeling and analytical arguments to describe data on the engulfment of silicon carbide particles during silicon crystal growth obtained via advanced terrestrial and microgravity experiments. For the first time in over a decade of research on SiC inclusions in silicon, our model is able to provide a quantitative correlation with experimental results, and we are able to unambiguously identify the underlying physical mechanisms that give rise to the observed behavior of this system. In particular, we identify a significant and previously unascertained interaction between particle-induced interface deflection (originating from the thermal conductivity of the SiC particle being larger than that of the surrounding silicon liquid) and curvature-induced changes in melting temperature arising from the Gibbs-Thomson effect. For a particular range of particle sizes, the Gibbs-Thomson effect flattens the deflected solidification interface, thereby reducing drag on the particle and increasing its critical velocity for engulfment. We show via numerical calculations and analytical reasoning that these effects give rise to a new scaling of the critical velocity to particle size as vc ∼ R(-5/3), whereas all prior models have predicted either vc ∼ R(-1) or vc ∼ R(-4/3). This new scaling is needed to quantitatively describe the experimental observations for this system.

  9. Development and validation of quantitative structure-activity relationship models for compounds acting on serotoninergic receptors.

    PubMed

    Zydek, Grażyna; Brzezińska, Elżbieta

    2012-01-01

    A quantitative structure-activity relationship (QSAR) study has been made on 20 compounds with serotonin (5-HT) receptor affinity. Thin-layer chromatographic (TLC) data and physicochemical parameters were applied in this study. RP2 TLC 60F(254) plates (silanized) impregnated with solutions of propionic acid, ethylbenzene, 4-ethylphenol, and propionamide (used as analogues of the key receptor amino acids) and their mixtures (denoted as S1-S7 biochromatographic models) were used in two developing phases as a model of drug-5-HT receptor interaction. The semiempirical method AM1 (HyperChem v. 7.0 program) and ACD/Labs v. 8.0 program were employed to calculate a set of physicochemical parameters for the investigated compounds. Correlation and multiple linear regression analysis were used to search for the best QSAR equations. The correlations obtained for the compounds studied represent their interactions with the proposed biochromatographic models. The good multivariate relationships (R(2) = 0.78-0.84) obtained by means of regression analysis can be used for predicting the quantitative effect of biological activity of different compounds with 5-HT receptor affinity. "Leave-one-out" (LOO) and "leave-N-out" (LNO) cross-validation methods were used to judge the predictive power of final regression equations.

  10. Modeling of microfluidic microbial fuel cells using quantitative bacterial transport parameters

    NASA Astrophysics Data System (ADS)

    Mardanpour, Mohammad Mahdi; Yaghmaei, Soheila; Kalantar, Mohammad

    2017-02-01

    The objective of the present study is to analyze dynamic modeling of bioelectrochemical processes and to improve the performance of previous models using quantitative data on bacterial transport parameters. The main deficiency of previous MFC models concerning the spatial distribution of biocatalysts is the assumed initial distribution of attached/suspended bacteria on the electrode or in the anolyte bulk, which is the foundation for biofilm formation. To address this shortcoming, the chemotactic motility of the suspended microorganisms is quantified numerically in order to understand the mechanisms of their distribution in the anolyte and/or their attachment to the anode surface to extend the biofilm. The spatial and temporal distributions of the bacteria, as well as the dynamic behavior of the anolyte and biofilm, are simulated. The performance of the microfluidic MFC as a chemotaxis assay is assessed by analyzing bacterial activity, substrate variation, the bioelectricity production rate and the influence of external resistance on the biofilm and the anolyte's features.
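
    Chemotactic motility is commonly quantified with a Keller-Segel-type transport term; the standard form below is given only as a sketch of the kind of equation involved, not as the paper's exact model.

    ```latex
    % Keller-Segel-type transport of bacterial density b toward substrate S:
    \frac{\partial b}{\partial t}
      = \nabla \cdot \left( D_b \, \nabla b \right)
      - \nabla \cdot \left( \chi(S) \, b \, \nabla S \right)
    % D_b: random-motility (diffusion) coefficient of the suspended bacteria
    % \chi(S): chemotactic sensitivity toward the substrate gradient
    ```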

  11. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2D Gaussian (σ ≈ 35°, r ≈ 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparent sigmoidal tuning using a restricted range of stimuli (±40°). 2) Spiral Tuning and Invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for ≈10° shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
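
    The 2D Gaussian fit reported above can be sketched directly; the responses on the heading grid below are synthetic, with the fit recovering a tuning width comparable to the quoted σ ≈ 35°.

    ```python
    # Sketch of fitting a 2-D Gaussian to heading-tuning responses on an
    # azimuth/elevation grid. The responses are synthetic, not recorded data.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, amp, x0, y0, sigma):
        x, y = xy
        return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

    az, el = np.meshgrid(np.linspace(-90, 90, 13), np.linspace(-90, 90, 13))
    xy = np.vstack([az.ravel(), el.ravel()])
    resp = gauss2d(xy, 40.0, 10.0, -5.0, 35.0) \
           + np.random.default_rng(2).normal(0, 1, xy.shape[1])

    (amp, x0, y0, sigma), _ = curve_fit(gauss2d, xy, resp, p0=[30, 0, 0, 30])
    print(f"fit: amp={amp:.1f}, center=({x0:.1f},{y0:.1f}) deg, sigma={sigma:.1f} deg")
    ```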

  12. The integration of quantitative multi-modality imaging data into mathematical models of tumors

    NASA Astrophysics Data System (ADS)

    Atuegwu, Nkiruka C.; Gore, John C.; Yankeelov, Thomas E.

    2010-05-01

    Quantitative imaging data obtained from multiple modalities may be integrated into mathematical models of tumor growth and treatment response to achieve additional insights of practical predictive value. We show how this approach can describe the development of tumors that appear realistic in terms of producing proliferating tumor rims and necrotic cores. Two established models (the logistic model with and without the effects of treatment) and one novel model built a priori from available imaging data have been studied. We modify the logistic model to predict the spatial expansion of a tumor driven by tumor cell migration after a voxel's carrying capacity has been reached. Depending on the efficacy of a simulated cytotoxic treatment, we show that the tumor may either continue to expand, or contract. The novel model includes hypoxia as a driver of tumor cell movement. The starting conditions for these models are based on imaging data related to the tumor cell number (as estimated from diffusion-weighted MRI), apoptosis (from 99mTc-Annexin-V SPECT), cell proliferation and hypoxia (from PET). We conclude that integrating multi-modality imaging data into mathematical models of tumor growth is a promising approach that can capture the salient features of tumor growth and treatment response, and this indicates the direction for additional research.
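
    The voxel-wise logistic model referred to above can be sketched in a few lines; the rate constant, carrying capacity and initial cell count are illustrative stand-ins for the imaging-derived values.

    ```python
    # Sketch of voxel-wise logistic growth: cell number N grows toward the
    # voxel carrying capacity theta, with N(0) estimated from DW-MRI in the
    # paper. All parameter values here are illustrative.
    def logistic_step(N, k=0.2, theta=1e6, dt=1.0):
        """One forward-Euler step of dN/dt = k * N * (1 - N / theta)."""
        return N + dt * k * N * (1.0 - N / theta)

    N = 5e4                       # initial cell count for one voxel (stand-in)
    for day in range(30):
        N = logistic_step(N)
    print(f"cells after 30 days: {N:.3e} (carrying capacity 1e6)")
    ```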

  13. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    EPA Science Inventory

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  14. A Framework for Quantitative Modeling of Neural Circuits Involved in Sleep-to-Wake Transition

    PubMed Central

    Sorooshyari, Siamak; Huerta, Ramón; de Lecea, Luis

    2015-01-01

    Identifying the neuronal circuits and dynamics of sleep-to-wake transition is essential to understanding brain regulation of behavioral states, including sleep–wake cycles, arousal, and hyperarousal. Recent work by different laboratories has used optogenetics to determine the role of individual neuromodulators in state transitions. The optogenetically driven data do not yet provide a multi-dimensional schematic of the mechanisms underlying changes in vigilance states. This work presents a modeling framework to interpret, assist, and drive research on the sleep-regulatory network. We identify feedback, redundancy, and gating hierarchy as three fundamental aspects of this model. The presented model is expected to expand as additional data on the contribution of each transmitter to a vigilance state becomes available. Incorporation of conductance-based models of neuronal ensembles into this model and existing models of cortical excitability will provide more comprehensive insight into sleep dynamics as well as sleep and arousal-related disorders. PMID:25767461

  15. Multi-epitope Models Explain How Pre-existing Antibodies Affect the Generation of Broadly Protective Responses to Influenza

    PubMed Central

    Zarnitsyna, Veronika I.; Lavine, Jennie; Ellebedy, Ali; Ahmed, Rafi; Antia, Rustom

    2016-01-01

    The development of next-generation influenza vaccines that elicit strain-transcendent immunity against both seasonal and pandemic viruses is a key public health goal. Targeting the evolutionarily conserved epitopes on the stem of influenza's major surface molecule, hemagglutinin, is an appealing prospect, and novel vaccine formulations show promising results in animal model systems. However, studies in humans indicate that natural infection and vaccination result in limited boosting of antibodies to the stem of HA, and the level of stem-specific antibody elicited is insufficient to provide broad strain-transcendent immunity. Here, we use mathematical models of the humoral immune response to explore how pre-existing immunity affects the ability of vaccines to boost antibodies to the head and stem of HA in humans, and, in particular, how it leads to the apparent lack of boosting of broadly cross-reactive antibodies to the stem epitopes. We consider hypotheses where binding of antibody to an epitope: (i) results in more rapid clearance of the antigen; (ii) leads to the formation of antigen-antibody complexes which inhibit B cell activation through an Fcγ receptor-mediated mechanism; and (iii) masks the epitope and prevents the stimulation and proliferation of specific B cells. We find that only epitope masking, and not the former two mechanisms, is key to recapitulating the patterns in the data. We discuss the ramifications of our findings for the development of vaccines against both seasonal and pandemic influenza. PMID:27336297

  16. Satellite Contributions to the Quantitative Characterization of Biomass Burning for Climate Modeling

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-01-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the representation of biomass burning quantitatively in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large list of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  17. Quantitative ultrasound molecular imaging by modeling the binding kinetics of targeted contrast agent

    NASA Astrophysics Data System (ADS)

    Turco, Simona; Tardy, Isabelle; Frinking, Peter; Wijkstra, Hessel; Mischi, Massimo

    2017-03-01

    Ultrasound molecular imaging (USMI) is an emerging technique to monitor diseases at the molecular level by the use of novel targeted ultrasound contrast agents (tUCA). These consist of microbubbles functionalized with targeting ligands with high-affinity for molecular markers of specific disease processes, such as cancer-related angiogenesis. Among the molecular markers of angiogenesis, the vascular endothelial growth factor receptor 2 (VEGFR2) is recognized to play a major role. In response, the clinical-grade tUCA BR55 was recently developed, consisting of VEGFR2-targeting microbubbles which can flow through the entire circulation and accumulate where VEGFR2 is over-expressed, thus causing selective enhancement in areas of active angiogenesis. Discrimination between bound and free microbubbles is crucial to assess cancer angiogenesis. Currently, this is done non-quantitatively by looking at the late enhancement, about 10 min after injection, or by calculation of the differential targeted enhancement, requiring the application of a high-pressure ultrasound (US) burst to destroy all the microbubbles in the acoustic field and isolate the signal coming only from bound microbubbles. In this work, we propose a novel method based on mathematical modeling of the binding kinetics during the tUCA first pass, thus reducing the acquisition time and with no need for a destructive US burst. Fitting time-intensity curves measured with USMI by the proposed model enables the assessment of cancer angiogenesis at both the vascular and molecular levels. This is achieved by estimation of quantitative parameters related to the microvascular architecture and microbubble binding. The proposed method was tested in 11 prostate-tumor bearing rats by performing USMI after injection of BR55, and showed good agreement with current USMI methods. The novel information provided by the proposed method, possibly combined with the current non-quantitative methods, may bring deeper insight into
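
    The modeling idea can be sketched as fitting the time-intensity curve with a free (first-pass) component plus a bound pool that accumulates from it. The gamma-variate passage and one-way binding term below are illustrative model choices, not the authors' published equations:

    ```python
    # Sketch: separate free and bound microbubble signal by fitting a
    # first-pass-plus-binding model to a time-intensity curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, a, alpha, beta, k_bind):
        free = a * np.power(t, alpha) * np.exp(-t / beta)  # first-pass bolus
        bound = k_bind * np.cumsum(free) * (t[1] - t[0])   # accumulating bound pool
        return free + bound

    t = np.linspace(0.01, 120.0, 400)
    noisy = model(t, 1.0, 2.0, 8.0, 0.01) + 0.02 * np.random.randn(t.size)

    popt, _ = curve_fit(model, t, noisy, p0=[0.5, 1.5, 5.0, 0.005], maxfev=20000)
    print("estimated binding-rate parameter: %.4f" % popt[3])
    ```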

  18. How predictive quantitative modelling of tissue organisation can inform liver disease pathogenesis.

    PubMed

    Drasdo, Dirk; Hoehme, Stefan; Hengstler, Jan G

    2014-10-01

    From the more than 100 liver diseases described, many of those with high incidence rates manifest themselves by histopathological changes, such as hepatitis, alcoholic liver disease, fatty liver disease, fibrosis, and, in its later stages, cirrhosis, hepatocellular carcinoma, primary biliary cirrhosis and other disorders. Studies of disease pathogeneses are largely based on integrating -omics data pooled from cells at different locations with spatial information from stained liver structures in animal models. Even though this has led to significant insights, the complexity of interactions as well as the involvement of processes at many different time and length scales constrains the possibility to condense disease processes in illustrations, schemes and tables. The combination of modern imaging modalities with image processing and analysis, and mathematical models opens up a promising new approach towards a quantitative understanding of pathologies and of disease processes. This strategy is discussed for two examples, ammonia metabolism after drug-induced acute liver damage, and the recovery of liver mass as well as architecture during the subsequent regeneration process. This interdisciplinary approach permits integration of biological mechanisms and models of processes contributing to disease progression at various scales into mathematical models. These can be used to perform in silico simulations to promote unravelling the relation between architecture and function as below illustrated for liver regeneration, and bridging from the in vitro situation and animal models to humans. In the near future novel mechanisms will usually not be directly elucidated by modelling. However, models will falsify hypotheses and guide towards the most informative experimental design.

  19. Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering

    DTIC Science & Technology

    2014-06-01

    [Only report front matter, an acronym list (BOM: Base Object Model; BPMN: Business Process Model & Notation; DOD...), and a fragment of the text survive in this record:] ...SysML. There are many variants such as the Unified Profile for DODAF/MODAF (UPDM) and Business Process Model & Notation (BPMN) that have origins in

  20. Preclinical MR fingerprinting (MRF) at 7 T: effective quantitative imaging for rodent disease models.

    PubMed

    Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A; Vincent, Jason A; Dell, Katherine M; Drumm, Mitchell L; Brady-Kalnay, Susann M; Griswold, Mark A; Flask, Chris A; Lu, Lan

    2015-03-01

    High-field preclinical MRI scanners are now commonly used to quantitatively assess disease status and the efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical 7.0-T MRI implementation of the highly novel MR fingerprinting (MRF) methodology which has been described previously for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a fast imaging with steady-state free precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 min. This initial high-field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for the quantification of numerous MRI parameters for a wide variety of preclinical imaging applications.
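
    The dictionary-matching step is generic enough to sketch: each measured signal evolution is assigned the (T1, T2) of the dictionary atom with the largest normalized inner product. The random "dictionary" below is a stand-in for Bloch-simulated FISP signal evolutions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_atoms = 600, 5000

    # Stand-in dictionary: unit-norm signal evolutions tagged with (T1, T2).
    dictionary = rng.standard_normal((n_atoms, n_timepoints))
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    t1_t2 = rng.uniform([100.0, 10.0], [3000.0, 300.0], size=(n_atoms, 2))  # ms

    truth = 1234
    signal = dictionary[truth] + 0.05 * rng.standard_normal(n_timepoints)

    scores = dictionary @ (signal / np.linalg.norm(signal))  # normalized inner products
    best = int(np.argmax(scores))
    print("matched atom %d, T1/T2 estimate: %s ms" % (best, t1_t2[best].round(1)))
    ```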

  1. Quantitative evaluation of mucosal vascular contrast in narrow band imaging using Monte Carlo modeling

    NASA Astrophysics Data System (ADS)

    Le, Du; Wang, Quanzeng; Ramella-Roman, Jessica; Pfefer, Joshua

    2012-06-01

    Narrow-band imaging (NBI) is a spectrally-selective reflectance imaging technique for enhanced visualization of superficial vasculature. Prior clinical studies have indicated NBI's potential for detection of vasculature abnormalities associated with gastrointestinal mucosal neoplasia. While the basic mechanisms behind the increased vessel contrast - hemoglobin absorption and tissue scattering - are known, a quantitative understanding of the effect of tissue and device parameters has not been achieved. In this investigation, we developed and implemented a numerical model of light propagation that simulates NBI reflectance distributions. This was accomplished by incorporating mucosal tissue layers and vessel-like structures in a voxel-based Monte Carlo algorithm. Epithelial and mucosal layers as well as blood vessels were defined using wavelength-specific optical properties. The model was implemented to calculate reflectance distributions and vessel contrast values as a function of vessel depth (0.05 to 0.50 mm) and diameter (0.01 to 0.10 mm). These relationships were determined for NBI wavelengths of 410 nm and 540 nm, as well as broadband illumination common to standard endoscopic imaging. The effects of illumination bandwidth on vessel contrast were also simulated. Our results provide a quantitative analysis of the effect of absorption and scattering on vessel contrast. Additional insights and potential approaches for improving NBI system contrast are discussed.
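
    The core of such a voxel-based Monte Carlo simulation is exponential free-path sampling with absorption weighting. A minimal sketch, with illustrative optical coefficients rather than the mucosal values used in the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu_a, mu_s = 1.5, 20.0    # absorption / scattering coefficients (mm^-1), illustrative
    mu_t = mu_a + mu_s

    n_photons, n_steps = 10000, 50
    weights = np.ones(n_photons)
    depths = np.zeros(n_photons)

    for _ in range(n_steps):
        step = -np.log(rng.random(n_photons)) / mu_t   # sampled free path length
        cos_theta = 2.0 * rng.random(n_photons) - 1.0  # isotropic scattering sketch
        depths += step * cos_theta
        weights *= mu_s / mu_t                         # albedo-weighted survival

    print("mean depth: %.3f mm, mean surviving weight: %.4f" % (depths.mean(), weights.mean()))
    ```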

  2. Understanding responder neurobiology in schizophrenia using a quantitative systems pharmacology model: application to iloperidone.

    PubMed

    Geerts, Hugo; Roberts, Patrick; Spiros, Athan; Potkin, Steven

    2015-04-01

    The concept of targeted therapies remains a holy grail for the pharmaceutical drug industry for identifying responder populations or new drug targets. Here we provide quantitative systems pharmacology as an alternative to the more traditional approach of retrospective responder pharmacogenomics analysis and applied this to the case of iloperidone in schizophrenia. This approach implements the actual neurophysiological effect of genotypes in a computer-based biophysically realistic model of human neuronal circuits, is parameterized with human imaging and pathology, and is calibrated by clinical data. We keep the drug pharmacology constant, but allow the biological model coupling values to fluctuate in a restricted range around their calibrated values, thereby simulating random genetic mutations and representing variability in patient response. Using hypothesis-free Design of Experiments methods, the coupling between the dopamine D4 receptor and the AMPA (alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor in cortical neurons was found to drive the beneficial effect of iloperidone, likely corresponding to the SNP rs2513265 upstream of the GRIA4 gene identified in a traditional pharmacogenomics analysis. The serotonin 5-HT3 receptor-mediated effect on interneuron gamma-aminobutyric acid conductance was identified as the process that moderately drove the differentiation of iloperidone versus ziprasidone. This paper suggests that reverse-engineered quantitative systems pharmacology is a powerful alternative tool to characterize the underlying neurobiology of a responder population and possibly identify new targets.

  3. A Cytomorphic Chip for Quantitative Modeling of Fundamental Bio-Molecular Circuits.

    PubMed

    2015-08-01

    We describe a 0.35 μm BiCMOS silicon chip that quantitatively models fundamental molecular circuits via efficient log-domain cytomorphic transistor equivalents. These circuits include those for biochemical binding with automatic representation of non-modular and loading behavior, e.g., in cascade and fan-out topologies; for representing variable Hill-coefficient operation and cooperative binding; for representing inducer, transcription-factor, and DNA binding; for probabilistic gene transcription with analogic representations of log-linear and saturating operation; for gain, degradation, and dynamics of mRNA and protein variables in transcription and translation; and, for faithfully representing biological noise via tunable stochastic transistor circuits. The use of on-chip DACs and ADCs enables multiple chips to interact via incoming and outgoing molecular digital data packets and thus create scalable biochemical reaction networks. The use of off-chip digital processors and on-chip digital memory enables programmable connectivity and parameter storage. We show that published static and dynamic MATLAB models of synthetic biological circuits including repressilators, feed-forward loops, and feedback oscillators are in excellent quantitative agreement with those from transistor circuits on the chip. Computationally intensive stochastic Gillespie simulations of molecular production are also rapidly reproduced by the chip and can be reliably tuned over the range of signal-to-noise ratios observed in biological cells.

  4. An adaptive model approach for quantitative wrist rigidity evaluation during deep brain stimulation surgery.

    PubMed

    Assis, Sofia; Costa, Pedro; Rosas, Maria Jose; Vaz, Rui; Silva Cunha, Joao Paulo

    2016-08-01

    Intraoperative evaluation of the efficacy of Deep Brain Stimulation includes evaluation of the effect on rigidity. A subjective semi-quantitative scale is used, dependent on the examiner's perception and experience. A system was proposed previously, aiming to tackle this subjectivity, using quantitative data and providing real-time feedback of the computed rigidity reduction, hence supporting the physician's decision. This system comprised a gyroscope-based motion sensor in a textile band, placed on the patient's hand, which communicated its measurements to a laptop. The latter computed a signal descriptor from the angular velocity of the hand during wrist flexion in DBS surgery. The first approach relied on a general rigidity reduction model, regardless of the initial severity of the symptom. Thus, to enhance the performance of the previously presented system, we aimed to develop models for high and low baseline rigidity, according to the examiner's assessment before any stimulation. This allows a more patient-oriented approach. Additionally, usability was improved by having in situ processing in a smartphone instead of a computer. The system has been shown to be reliable, presenting an accuracy of 82.0% and a mean error of 3.4%. Relative to previous results, the performance was similar, further supporting the importance of considering cogwheel rigidity to better infer the reduction in rigidity. Overall, we present a simple, wearable, mobile system, suitable for intraoperative conditions during DBS, that supports the physician in decision-making when setting stimulation parameters.
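
    A sketch of how a descriptor might be computed from the gyroscope trace; the descriptor choice (mean absolute angular velocity) and the simulated signals are assumptions for illustration, not the system's actual algorithm:

    ```python
    import numpy as np

    def rigidity_descriptor(angular_velocity):
        """Mean absolute angular velocity over the passive-flexion recording."""
        return float(np.mean(np.abs(angular_velocity)))

    fs = 100.0                                   # sampling rate (Hz), assumed
    t = np.arange(0.0, 10.0, 1.0 / fs)
    omega = 40 * np.sin(2 * np.pi * 0.5 * t) + 2 * np.random.randn(t.size)

    baseline = rigidity_descriptor(omega)
    stim = rigidity_descriptor(0.7 * omega)      # stand-in post-stimulation trace
    print("computed rigidity reduction: %.1f%%" % (100 * (1 - stim / baseline)))
    ```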

  5. Quantitative measurement of eyestrain on 3D stereoscopic display considering the eye foveation model and edge information.

    PubMed

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-05-15

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this as the gaze position with the highest probability of being correct. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors.

  6. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depths to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature and one curve specific to our case study area were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors.
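
    The quantitative risk step reduces to hazard probability times vulnerability(intensity) times exposed value. A minimal sketch with a placeholder vulnerability curve and invented scenario numbers, not the study's calibrated values:

    ```python
    import numpy as np

    def vulnerability(depth_m):
        """Toy damage fraction rising linearly with debris-flow depth, capped at 1."""
        return np.clip(depth_m / 3.0, 0.0, 1.0)

    depths = np.array([0.5, 1.5, 3.0])           # modelled flow-depth scenarios (m)
    annual_prob = np.array([0.05, 0.01, 0.002])  # scenario occurrence probabilities
    value_eur = 350_000                          # building market value (illustrative)

    expected_annual_loss = np.sum(annual_prob * vulnerability(depths) * value_eur)
    print("expected annual loss: %.0f EUR" % expected_annual_loss)
    ```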

  7. A quantitative validated model reveals two phases of transcriptional regulation for the gap gene giant in Drosophila.

    PubMed

    Hoermann, Astrid; Cicin-Sain, Damjan; Jaeger, Johannes

    2016-03-15

    Understanding eukaryotic transcriptional regulation and its role in development and pattern formation is one of the big challenges in biology today. Most attempts at tackling this problem either focus on the molecular details of transcription factor binding, or aim at genome-wide prediction of expression patterns from sequence through bioinformatics and mathematical modelling. Here we bridge the gap between these two complementary approaches by providing an integrative model of cis-regulatory elements governing the expression of the gap gene giant (gt) in the blastoderm embryo of Drosophila melanogaster. We use a reverse-engineering method, where mathematical models are fit to quantitative spatio-temporal reporter gene expression data to infer the regulatory mechanisms underlying gt expression in its anterior and posterior domains. These models are validated through prediction of gene expression in mutant backgrounds. A detailed analysis of our data and models reveals that gt is regulated by domain-specific CREs at early stages, while a late element drives expression in both the anterior and the posterior domains. Initial gt expression depends exclusively on inputs from maternal factors. Later, gap gene cross-repression and gt auto-activation become increasingly important. We show that auto-regulation creates a positive feedback, which mediates the transition from early to late stages of regulation. We confirm the existence and role of gt auto-activation through targeted mutagenesis of Gt transcription factor binding sites. In summary, our analysis provides a comprehensive picture of spatio-temporal gene regulation by different interacting enhancer elements for an important developmental regulator.

  8. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.

    PubMed

    Pang, Wei; Coghill, George M

    2015-05-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed.

  9. Whole-brain ex-vivo quantitative MRI of the cuprizone mouse model

    PubMed Central

    Hurley, Samuel A.; Vernon, Anthony C.; Torres, Joel; Dell’Acqua, Flavio; Williams, Steve C.R.; Cash, Diana

    2016-01-01

    Myelin is a critical component of the nervous system and a major contributor to contrast in Magnetic Resonance (MR) images. However, the precise contribution of myelination to multiple MR modalities is still under debate. The cuprizone mouse is a well-established model of demyelination that has been used in several MR studies, but these have often imaged only a single slice and analysed a small region of interest in the corpus callosum. We imaged and analyzed the whole brain of the cuprizone mouse ex-vivo using high-resolution quantitative MR methods (multi-component relaxometry, Diffusion Tensor Imaging (DTI) and morphometry) and found changes in multiple regions, including the corpus callosum, cerebellum, thalamus and hippocampus. The presence of inflammation, confirmed with histology, presents difficulties in isolating the sensitivity and specificity of these MR methods to demyelination using this model. PMID:27833805

  10. NetLand: quantitative modeling and visualization of Waddington's epigenetic landscape using probabilistic potential.

    PubMed

    Guo, Jing; Lin, Feng; Zhang, Xiaomeng; Tanavde, Vivek; Zheng, Jie

    2017-01-19

    Waddington's epigenetic landscape is a powerful metaphor for cellular dynamics driven by gene regulatory networks. Its quantitative modeling and visualization, however, remains a challenge, especially when there are more than two genes in the network. A software tool for Waddington's landscape has not been available in the literature. We present NetLand, an open-source software tool for modeling and simulating the kinetic dynamics of gene regulatory networks (GRNs), and visualizing the corresponding Waddington's epigenetic landscape in three dimensions without restriction on the number of genes in a GRN. With an interactive and graphical user interface, NetLand can facilitate the knowledge discovery and experimental design in the study of cell fate regulation (e.g. stem cell differentiation and reprogramming).
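
    A common construction for such probabilistic potentials, sketched here for a hypothetical two-gene toggle switch: simulate the stochastic dynamics, estimate the steady-state density P, and take U = -ln P as the landscape elevation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def drift(x, a=2.0, n=4, k=1.0, d=1.0):
        """Mutual repression: each gene inhibits the other (toggle switch)."""
        return np.array([a / (1 + (x[1] / k) ** n) - d * x[0],
                         a / (1 + (x[0] / k) ** n) - d * x[1]])

    dt, n_steps = 0.01, 100_000
    x = np.array([1.0, 1.0])
    samples = np.zeros((n_steps, 2))
    for i in range(n_steps):
        x = np.maximum(x + drift(x) * dt + 0.15 * np.sqrt(dt) * rng.standard_normal(2), 0.0)
        samples[i] = x

    hist, _, _ = np.histogram2d(samples[:, 0], samples[:, 1], bins=50, density=True)
    potential = -np.log(hist + 1e-9)   # low values mark attractors (cell fates)
    print("potential range: %.2f to %.2f" % (potential.min(), potential.max()))
    ```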

  11. Quantitative Circulatory Physiology: an integrative mathematical model of human physiology for medical education.

    PubMed

    Abram, Sean R; Hodnett, Benjamin L; Summers, Richard L; Coleman, Thomas G; Hester, Robert L

    2007-06-01

    We have developed Quantitative Circulatory Physiology (QCP), a mathematical model of integrative human physiology containing over 4,000 variables of biological interactions. This model provides a teaching environment that mimics clinical problems encountered in the practice of medicine. The model structure is based on documented physiological responses within peer-reviewed literature and serves as a dynamic compendium of physiological knowledge. The model is solved using a desktop, Windows-based program, allowing students to calculate time-dependent solutions and interactively alter over 750 parameters that modify physiological function. The model can be used to understand proposed mechanisms of physiological function and the interactions among physiological variables that may not be otherwise intuitively evident. In addition to open-ended or unstructured simulations, we have developed 30 physiological simulations, including heart failure, anemia, diabetes, and hemorrhage. Additional simulations include 29 patient cases in which students are challenged to diagnose the pathophysiology based on their understanding of integrative physiology. In summary, QCP allows students to examine, integrate, and understand a host of physiological factors without causing harm to patients. This model is available as a free download for Windows computers at http://physiology.umc.edu/themodelingworkshop.

  12. Pharmacodynamic model of sodium-glucose transporter 2 (SGLT2) inhibition: implications for quantitative translational pharmacology.

    PubMed

    Maurer, Tristan S; Ghosh, Avijit; Haddish-Berhane, Nahor; Sawant-Basak, Aarti; Boustany-Kari, Carine M; She, Li; Leininger, Michael T; Zhu, Tong; Tugnait, Meera; Yang, Xin; Kimoto, Emi; Mascitti, Vincent; Robinson, Ralph P

    2011-12-01

    Sodium-glucose co-transporter-2 (SGLT2) inhibitors are an emerging class of agents for use in the treatment of type 2 diabetes mellitus (T2DM). Inhibition of SGLT2 leads to improved glycemic control through increased urinary glucose excretion (UGE). In this study, a biologically based pharmacokinetic/pharmacodynamic (PK/PD) model of SGLT2 inhibitor-mediated UGE was developed. The derived model was used to characterize the acute PK/PD relationship of the SGLT2 inhibitor, dapagliflozin, in rats. The quantitative translational pharmacology of dapagliflozin was examined through both prospective simulation and direct modeling of mean literature data obtained for dapagliflozin in healthy subjects. Prospective simulations provided time courses of UGE that were of consistent shape to clinical observations, but were modestly biased toward under prediction. Direct modeling provided an improved characterization of the data and precise parameter estimates which were reasonably consistent with those predicted from preclinical data. Overall, these results indicate that the acute clinical pharmacology of SGLT2 inhibitors in healthy subjects can be reasonably well predicted from preclinical data through rational accounting of species differences in pharmacokinetics, physiology, and SGLT2 pharmacology. Because these data can be generated at the earliest stages of drug discovery, the proposed model is useful in the design and development of novel SGLT2 inhibitors. In addition, this model is expected to serve as a useful foundation for future efforts to understand and predict the effects of SGLT2 inhibition under chronic administration and in other patient populations.
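
    The pharmacodynamic core can be sketched as filtered glucose load minus a reabsorption capacity that the inhibitor reduces via an Imax model. All numbers below are illustrative stand-ins, not the study's parameter estimates:

    ```python
    def uge_rate(plasma_glucose, gfr, tm, drug_conc, ic50, imax=1.0):
        """Urinary glucose excretion rate (mg/min) under SGLT2 inhibition."""
        filtered = gfr * plasma_glucose                     # filtered load (mg/min)
        inhibition = imax * drug_conc / (ic50 + drug_conc)  # fraction of transport blocked
        reabsorbed = min(filtered, tm * (1.0 - inhibition)) # capacity-limited reabsorption
        return filtered - reabsorbed

    # Healthy-subject-like numbers (hypothetical): glucose 1 mg/mL, GFR 120 mL/min,
    # transport maximum 375 mg/min.
    print("UGE without drug: %.0f mg/min" % uge_rate(1.0, 120.0, 375.0, 0.0, 5.0))
    print("UGE with drug:    %.0f mg/min" % uge_rate(1.0, 120.0, 375.0, 50.0, 5.0))
    ```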

  13. 3D/2D Model-to-Image Registration for Quantitative Dietary Assessment.

    PubMed

    Chen, Hsin-Chen; Jia, Wenyan; Li, Zhaoxin; Sun, Yung-Nien; Sun, Mingui

    2012-12-31

    Image-based dietary assessment is important for health monitoring and management because it can provide quantitative and objective information, such as food volume, nutrition type, and calorie intake. In this paper, a new framework, 3D/2D model-to-image registration, is presented for estimating food volume from a single-view 2D image containing a reference object (i.e., a circular dining plate). First, the food is segmented from the background image based on Otsu's thresholding and morphological operations. Next, the food volume is obtained from a user-selected, 3D shape model. The position, orientation and scale of the model are optimized by a model-to-image registration process. Then, the circular plate in the image is fitted and its spatial information is used as constraints for solving the registration problem. Our method takes the global contour information of the shape model into account to obtain a reliable food volume estimate. Experimental results using regularly shaped test objects and realistically shaped food models with known volumes both demonstrate the effectiveness of our method.
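
    The first step (Otsu thresholding plus morphological clean-up) can be sketched with scikit-image; the synthetic image and the structuring-element size are assumptions for illustration:

    ```python
    import numpy as np
    from skimage import filters, morphology

    rng = np.random.default_rng(3)
    # Stand-in grayscale image: a bright region ("food") on a darker background.
    img = rng.normal(0.3, 0.05, (128, 128))
    img[40:90, 30:100] += 0.4

    thresh = filters.threshold_otsu(img)
    mask = img > thresh
    mask = morphology.binary_opening(mask, morphology.disk(3))  # remove small specks
    mask = morphology.binary_closing(mask, morphology.disk(3))  # fill small holes

    print("Otsu threshold %.3f, segmented pixels: %d" % (thresh, int(mask.sum())))
    ```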

  14. A pulsatile flow model for in vitro quantitative evaluation of prosthetic valve regurgitation.

    PubMed

    Giuliatti, S; Gallo, L; Almeida-Filho, O C; Schmidt, A; Marin-Neto, J A; Pelá, C A; Maciel, B C

    2000-03-01

    A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.

  15. The promises of quantitative systems pharmacology modelling for drug development.

    PubMed

    Knight-Schrijver, V R; Chelliah, V; Cucurull-Sanchez, L; Le Novère, N

    2016-01-01

    Recent growth in annual new therapeutic entity (NTE) approvals by the U.S. Food and Drug Administration (FDA) suggests a positive trend in current research and development (R&D) output. Prior to this, the cost of each NTE was considered to be rising exponentially, with compound failure occurring mainly in clinical phases. Quantitative systems pharmacology (QSP) modelling, as an additional tool in the drug discovery arsenal, aims to further reduce NTE costs and improve drug development success. Through in silico mathematical modelling, QSP can simulate drug activity as perturbations in biological systems and thus understand the fundamental interactions which drive disease pathology, compound pharmacology and patient response. Here we review QSP, pharmacometrics and systems biology models with respect to the diseases covered as well as their clinical relevance and applications. Overall, the majority of modelling focus was aligned with the priority of drug-discovery and clinical trials. However, a few clinically important disease categories, such as Immune System Diseases and Respiratory Tract Diseases, were poorly covered by computational models. This suggests a possible disconnect between clinical and modelling agendas. As a standard element of the drug discovery pipeline the uptake of QSP might help to increase the efficiency of drug development across all therapeutic indications.

  16. How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies

    PubMed Central

    Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A

    2013-01-01

    A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies traits associated with reproductive investment and growth evolved more slowly than traits related to maturation. In empirical observations age-at-maturation was changing faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, the rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026

  17. Quantitative Structure Activity Relationship Models for the Antioxidant Activity of Polysaccharides

    PubMed Central

    Nie, Kaiying; Wang, Zhaojing

    2016-01-01

    In this study, quantitative structure activity relationship (QSAR) models for the antioxidant activity of polysaccharides were developed with the 50% effective concentration (EC50) as the dependent variable. To establish optimum QSAR models, multiple linear regressions (MLR), support vector machines (SVM) and artificial neural networks (ANN) were used, and 11 molecular descriptors were selected. The optimum QSAR model for predicting the EC50 of DPPH-scavenging activity consisted of four major descriptors. The MLR model gave EC50 = 0.033Ara-0.041GalA-0.03GlcA-0.025PC+0.484, and fitted the training set with R = 0.807. The ANN model improved on both the training set (R = 0.96, RMSE = 0.018) and the test set (R = 0.933, RMSE = 0.055), indicating that it was more accurate than the SVM and MLR models for predicting the DPPH-scavenging activity of polysaccharides. 67 compounds were used for predicting the EC50 of the hydroxyl-radical scavenging activity of polysaccharides. The MLR model gave EC50 = 0.12PC+0.083Fuc+0.013Rha-0.02UA+0.372. A comparison of results from the models indicated that the ANN model (R = 0.944, RMSE = 0.119) was also the best one for predicting the hydroxyl-radical scavenging activity of polysaccharides. The MLR and ANN models showed that Ara and GalA appeared critical in determining the EC50 of DPPH-scavenging activity, and that Fuc, Rha, uronic acid and protein content had a great effect on the hydroxyl-radical scavenging activity of polysaccharides. The antioxidant activity of polysaccharides was usually high in the MW range of 4000–100000, and could be affected simultaneously by other polysaccharide properties, such as uronic acid and Ara. PMID:27685320
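
    The quoted DPPH regression can be applied directly as a predictive sketch; the composition values below are hypothetical inputs, with units as scaled in the original fitting (Ara: arabinose, GalA: galacturonic acid, GlcA: glucuronic acid, PC: protein content):

    ```python
    def ec50_dpph(ara, gal_a, glc_a, pc):
        """MLR model from the abstract: EC50 = 0.033Ara - 0.041GalA - 0.03GlcA - 0.025PC + 0.484."""
        return 0.033 * ara - 0.041 * gal_a - 0.03 * glc_a - 0.025 * pc + 0.484

    # Hypothetical polysaccharide composition.
    print("predicted DPPH EC50: %.3f" % ec50_dpph(ara=2.0, gal_a=1.5, glc_a=0.8, pc=1.2))
    ```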

  18. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    SciTech Connect

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  19. A Novel Animal Model of Partial Optic Nerve Transection Established Using an Optic Nerve Quantitative Amputator

    PubMed Central

    Wang, Xu; Li, Ying; He, Yan; Liang, Hong-Sheng; Liu, En-Zhong

    2012-01-01

    Background Research into retinal ganglion cell (RGC) degeneration and neuroprotection after optic nerve injury has received considerable attention and the establishment of simple and effective animal models is of critical importance for future progress. Methodology/Principal Findings In the present study, the optic nerves of Wistar rats were semi-transected selectively with a novel optic nerve quantitative amputator. The variation in RGC density was observed with retro-labeled fluorogold at different time points after nerve injury. The densities of surviving RGCs in the experimental eyes at different time points were 1113.69±188.83 RGC/mm2 (the survival rate was 63.81% compared with the contralateral eye of the same animal) 1 week post surgery; 748.22±134.75 /mm2 (46.16% survival rate) 2 weeks post surgery; 505.03±118.67 /mm2 (30.52% survival rate) 4 weeks post surgery; 436.86±76.36 /mm2 (24.01% survival rate) 8 weeks post surgery; and 378.20±66.74 /mm2 (20.30% survival rate) 12 weeks post surgery. Simultaneously, we also measured the axonal distribution of optic nerve fibers; the latency and amplitude of pattern visual evoke potentials (P-VEP); and the variation in pupil diameter response to pupillary light reflex. All of these observations and profiles were consistent with post injury variation characteristics of the optic nerve. These results indicate that we effectively simulated the pathological process of primary and secondary injury after optic nerve injury. Conclusions/Significance The present quantitative transection optic nerve injury model has increased reproducibility, effectiveness and uniformity. This model is an ideal animal model to provide a foundation for researching new treatments for nerve repair after optic nerve and/or central nerve injury. PMID:22973439

  20. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
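
    The multivariate test statistics named above (Pillai–Bartlett trace, Hotelling–Lawley trace, Wilks's Lambda) can be computed for an ordinary multivariate linear model with statsmodels. This sketches the classical MANOVA tests on simulated data, not the functional linear model itself:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(4)
    n = 200
    genotype = rng.integers(0, 3, n)                    # additive coding 0/1/2
    df = pd.DataFrame({
        "g": genotype,
        "y1": 0.3 * genotype + rng.standard_normal(n),  # associated trait
        "y2": 0.2 * genotype + rng.standard_normal(n),  # associated trait
        "y3": rng.standard_normal(n),                   # null trait
    })

    fit = MANOVA.from_formula("y1 + y2 + y3 ~ g", data=df)
    print(fit.mv_test())   # Pillai, Hotelling-Lawley, Wilks, Roy
    ```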

  1. [Quantitative models between canopy hyperspectrum and its component features at apple tree prosperous fruit stage].

    PubMed

    Wang, Ling; Zhao, Geng-xing; Zhu, Xi-cun; Lei, Tong; Dong, Fang

    2010-10-01

    Hyperspectral technique has become the basis of quantitative remote sensing. The hyperspectrum of an apple tree canopy at the prosperous fruit stage consists of the complex information of fruits, leaves, stocks, soil and reflecting films, and is mostly affected by the component features of the canopy at this stage. First, the hyperspectrum of 18 sample apple trees with reflecting films was compared with that of 44 trees without reflecting films. The impact of reflecting films on reflectance was obvious, so the sample trees with ground reflecting films had to be analyzed separately from those without. Secondly, nine indexes of canopy components were built based on classified digital photos of the 44 apple trees without ground films. Thirdly, the correlation between the nine indexes and canopy reflectance, including several kinds of transformed data, was analyzed. The results showed that the correlation between reflectance and the ratio of fruit to leaf was the best, with a maximum coefficient of 0.815, and the correlation between reflectance and the ratio of leaf was slightly better than that between reflectance and the density of fruit. Then models of correlation analysis, linear regression, BP neural network and support vector regression were used to describe the quantitative relationship between hyperspectral reflectance and the ratio of fruit to leaf, with the DPS and LIBSVM software packages. All four models based on the 611-680 nm characteristic band were feasible for prediction, while the accuracy of the BP neural network and support vector regression models was better than that of one-variable and multi-variable linear regression, and the support vector regression model was the most accurate. This study serves as a reliable theoretical reference for the yield estimation of apples based on remote sensing data.

  2. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case.

  3. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    PubMed

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.
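
    A heavily simplified stochastic sketch of the clearance-versus-escape dichotomy described above; the single lumped escape probability and ramping clearance are illustrative, not the paper's epitope-resolved model:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def simulate(mu=1e-4, r=1.3, n0=10, t_max=100):
        """One infection: wild-type growth, ramping immune clearance, escape seeding."""
        wild = n0
        for t in range(t_max):
            kill = min(0.02 * t, 0.95)          # immune response ramps up over time
            wild = rng.poisson(r * (1.0 - kill) * wild)
            if rng.binomial(wild, mu) > 0:      # an escape mutant is seeded
                return "escape"
            if wild == 0:
                return "cleared"
        return "persisting"

    outcomes = [simulate() for _ in range(500)]
    print({o: outcomes.count(o) for o in set(outcomes)})
    ```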

  4. Statistical tests for comparison of quantitative and qualitative models developed with near infrared spectral data

    NASA Astrophysics Data System (ADS)

    Roggo, Y.; Duponchel, L.; Ruckebusch, C.; Huvenne, J.-P.

    2003-06-01

    Near-infrared spectroscopy (NIRS) has been applied for both qualitative and quantitative evaluation of sugar beet. However, chemometrics methods are numerous and a choice criterion is sometimes difficult to determine. In order to select the most accurate chemometrics method, statistical tests are developed. In the first part, quantitative models, which predict the sucrose content of sugar beet, are compared. To realize a systematic study, 54 models are developed with different spectral pre-treatments (Standard Normal Variate (SNV), Detrending (D), first and second Derivative), different spectral ranges and different regression methods (Principal Component Regression (PCR), Partial Least Squares (PLS), Modified PLS (MPLS)). Analysis of variance and Fisher's tests are computed to compare bias and Standard Error of Prediction Corrected for bias (SEP(C)), respectively. The model developed with full spectra pre-treated by SNV, second derivative and MPLS methods gives accurate results: the bias is 0.008 and the SEP(C) is 0.097 g of sucrose per 100 g of sample on a concentration range between 14 and 21 g/100 g. In the second part, McNemar's test is applied to compare the classification methods. The classification methods are applied to two data sets: the first concerns the disease resistance of sugar beet and the second deals with spectral differences between four spectrometers. The performances of four well-known classification methods are compared on the NIRS data: Linear Discriminant Analysis (LDA), the K Nearest Neighbors method (KNN), Simple Modeling of Class Analogy (SIMCA) and the Learning Vector Quantization neural network (LVQ). In this study, the most accurate method (SIMCA) achieves 81.9% correct classification on the disease-resistance determination and 99.4% correct classification on the instrument data set.
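
    The McNemar step compares two classifiers on the same samples using only the discordant pairs. A sketch with an invented 2x2 agreement table, using the implementation in statsmodels:

    ```python
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Rows: classifier A correct / wrong; columns: classifier B correct / wrong.
    table = np.array([[62, 10],
                      [3, 5]])

    result = mcnemar(table, exact=True)  # exact binomial test on the 10-vs-3 discordant cells
    print("statistic = %g, p-value = %.4f" % (result.statistic, result.pvalue))
    ```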

  5. Evaluation of quantitative structure-activity relationship modeling strategies: local and global models.

    PubMed

    Helgee, Ernst Ahlberg; Carlsson, Lars; Boyer, Scott; Norinder, Ulf

    2010-04-26

    A thorough comparison between different QSAR modeling strategies is presented. The comparison covers local versus global modeling strategies, risk assessment, and computational cost. The strategies are implemented using random forests, support vector machines, and partial least squares. Results are presented for simulated data as well as for real data, generally indicating that a global modeling strategy is preferred over a local strategy. Furthermore, the results also show that there is a pronounced risk and a comparatively high computational cost when using the local modeling strategies.

  6. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    PubMed

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L; Granata, Christopher; Levin, Michael

    2010-12-17

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science.

  7. A Second-Generation Device for Automated Training and Quantitative Behavior Analyses of Molecularly-Tractable Model Organisms

    PubMed Central

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  8. Evaluation of New Zealand's high-seas bottom trawl closures using predictive habitat models and quantitative risk assessment.

    PubMed

    Penney, Andrew J; Guinotte, John M

    2013-01-01

    United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine ecosystems (VMEs) likely to occur?; 2) What is the likelihood of fisheries interaction with these VMEs?; and 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost:benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. The distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost:benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas.

  9. Methylene blue does not reverse existing neurofibrillary tangle pathology in the rTg4510 mouse model of tauopathy.

    PubMed

    Spires-Jones, Tara L; Friedman, Taylor; Pitstick, Rose; Polydoro, Manuela; Roe, Allyson; Carlson, George A; Hyman, Bradley T

    2014-03-06

    Alzheimer's disease is characterized pathologically by aggregation of amyloid beta into senile plaques and aggregation of pathologically modified tau into neurofibrillary tangles. While changes in amyloid processing are strongly implicated in disease initiation, the recent failure of amyloid-based therapies has highlighted the importance of tau as a therapeutic target. "Tangle busting" compounds including methylene blue and analogous molecules are currently being evaluated as therapeutics in Alzheimer's disease. Previous studies indicated that methylene blue can reverse tau aggregation in vitro after 10 min, and subsequent studies suggested that high levels of drug reduce tau protein levels (assessed biochemically) in vivo. Here, we tested whether methylene blue could remove established neurofibrillary tangles in the rTg4510 model of tauopathy, which develops robust tangle pathology. We find that 6 weeks of methylene blue dosing in the water from 16 months to 17.5 months of age decreases soluble tau but does not remove sarkosyl insoluble tau, or histologically defined PHF1 or Gallyas positive tangle pathology. These data indicate that methylene blue treatment will likely not rapidly reverse existing tangle pathology.

  10. Integrability conditions between the first and second Cosserat deformation tensor in geometrically nonlinear micropolar models and existence of minimizers

    NASA Astrophysics Data System (ADS)

    Lankeit, Johannes; Neff, Patrizio; Osterbrink, Frank

    2017-02-01

    In this note, we extend integrability conditions for the symmetric stretch tensor $U$ in the polar decomposition of the deformation gradient $\nabla \varphi = F = R\,U$ to the nonsymmetric case. In doing so, we recover integrability conditions for the first Cosserat deformation tensor. Let $F = \overline{R}\,\overline{U}$ with $\overline{R}: \Omega \subset \mathbb{R}^3 \longrightarrow \mathrm{SO}(3)$ and $\overline{U}: \Omega \subset \mathbb{R}^3 \longrightarrow \mathrm{GL}(3)$. Then $$K := \overline{R}^T\, \mathrm{Grad}\, \overline{R} = \mathrm{Anti}\!\left( \frac{\left[ \overline{U}\,(\mathrm{Curl}\, \overline{U})^T - \tfrac{1}{2}\, \mathrm{tr}\!\left( \overline{U}\,(\mathrm{Curl}\, \overline{U})^T \right) \mathbb{1} \right] \overline{U}}{\det \overline{U}} \right),$$ giving a connection between the first Cosserat deformation tensor $\overline{U}$ and the second Cosserat tensor $K$. (Here, $\mathrm{Anti}$ denotes an isomorphism between $\mathbb{R}^{3\times 3}$ and $\mathfrak{So}(3) := \{ A \in \mathbb{R}^{3\times 3\times 3} \mid A.u \in \mathfrak{so}(3)\ \forall\, u \in \mathbb{R}^3 \}$.) The formula shows that it is not possible to prescribe $\overline{U}$ and $K$ independently of each other. We also propose a new energy formulation of geometrically nonlinear Cosserat models which completely separates the effects of nonsymmetric straining and curvature. For very weak constitutive assumptions (no direct boundary condition on rotations, zero Cosserat couple modulus, quadratic curvature energy), we show existence of minimizers in Sobolev spaces.

  11. Estimating cost effectiveness of residential yard trees for improving air quality in Sacramento, California, using existing models

    NASA Astrophysics Data System (ADS)

    McPherson, E. Gregory; Scott, Klaus I.; Simpson, James R.

    The Sacramento Municipal Utility District's (SMUD) shade tree program will result in the planting of 500,000 trees and has been found to produce net benefits from air conditioning savings. In this study we assume three scenarios (base, highest, and lowest benefits) based on the SMUD program and apply Best Available Control Technology (BACT) cost analysis to determine whether shade trees planted in residential yards can be a cost-effective means of improving air quality. Planting and maintenance costs, pollutant deposition, and biogenic hydrocarbon emissions are estimated annually for 30 years with existing deterministic models. For the base case, the average annual dollar benefit of pollutant uptake was $895 and the cost of biogenic hydrocarbon emissions was $512, for a net pollutant uptake benefit of $383 per 100 trees planted. The uniform annual payment necessary to repay planting and maintenance costs with a 10% rate of interest was $749. When high biogenic hydrocarbon emitting tree species were replaced with low-emitters, the base case benefit-cost ratio (BCR) increased from 0.5:1 to 0.9:1. The BCRs for the "highest" and "lowest" benefit cases were 2.2:1 and -0.8:1, respectively. Although SMUD plantings produce cost-effective energy savings, our application of the BACT analysis does not provide convincing evidence of cost savings when only air quality benefits are considered.
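
    The base-case figures can be checked with a few lines of arithmetic: the net uptake benefit is the $895 benefit minus the $512 emission cost, and the $749 cost term is a standard uniform-annuity payment at 10% over the 30-year horizon. In the sketch below the repaid principal is back-calculated for illustration; only the $895/$512 benefits, the $749 payment, the 10% rate and the 30-year horizon come from the abstract.

    ```python
    uptake_benefit = 895.0   # $ per 100 trees per year (reported)
    bvoc_cost = 512.0        # $ cost of biogenic hydrocarbon emissions (reported)
    net_benefit = uptake_benefit - bvoc_cost   # $383, as reported

    def annuity_payment(principal, rate, years):
        """Uniform annual payment that repays `principal` over `years`."""
        return principal * rate / (1.0 - (1.0 + rate) ** -years)

    # Back-calculated principal (~$7,061 per 100 trees) reproducing the
    # reported $749 annual payment at 10% over 30 years -- an assumption,
    # since the abstract reports only the payment itself.
    print(annuity_payment(7061.0, 0.10, 30))   # ~749
    print(net_benefit / 749.0)                 # benefit-cost ratio ~0.5:1
    ```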

  12. Solitary waves and shocks in suprathermal plasmas modelled by the kappa distribution: existence and propagation characteristics from first principles (Invited)

    NASA Astrophysics Data System (ADS)

    Kourakis, I.; Hellberg, M. A.

    2013-12-01

    Space plasmas are often characterized by the presence of energetic particles in the background, e.g. due to various electron acceleration mechanisms [1]. This phenomenon is associated with a power-law dependence at high (superthermal) velocity values, modeled by a kappa-type distribution function, which reproduces observed data more efficiently than the standard Maxwellian distribution approach [2]. It has been shown from first principles that this ubiquitous superthermal feature of plasmas may alter the propagation characteristics of plasma modes, and modify the plasma screening properties [3]. We review, from first principles, the effect of excess superthermality on the characteristics of electrostatic nonlinear plasma modes. We employ a kappa distribution function [1] to model the deviation of a plasma constituent (electrons, in general) from Maxwellian equilibrium. An excess superthermal population modifies the charge screening mechanism, affecting the dispersion laws of both low- and higher-frequency modes substantially. Various experimental observations may thus be interpreted as manifestations of excess superthermality [2]. Focusing on the features of nonlinear excitations (shocks, solitons), we investigate the role of superthermality in their propagation dynamics (existence laws, stability profile) and dynamical profile [3]. The relation to other nonthermal plasma theories is briefly discussed. [1] See V.M. Vasyliunas, J. Geophys. Res. 73, 2839 (1968), for a historical reference; also, V. Pierrard and M. Lazar, Solar Phys. 267, 153 (2010), for a more recent review. [2] M. Hellberg et al, J. Plasma Physics 64, 433 (2000). [3] S. Sultana, I. Kourakis, N.S. Saini, M.A. Hellberg, Phys. Plasmas 17, 032310 (2010); S. Sultana and I. Kourakis, Plasma Phys. Cont. Fus. 53, 045003 (2011); S. Sultana, G. Sarri and I. Kourakis, Phys. Plasmas 19, 012310 (2012); I. Kourakis, S. Sultana and M.A. Hellberg, Plasma Phys. Cont. Fusion, 54, 124001 (2012); G. Williams and
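
    For readers unfamiliar with the kappa distribution, the sketch below evaluates one common normalization of the isotropic kappa velocity distribution (valid for kappa > 3/2) against the Maxwellian it approaches as kappa grows, showing the enhanced superthermal tail; units and parameter values are arbitrary.

    ```python
    import numpy as np
    from scipy.special import gamma

    def f_kappa(v, n, theta, kappa):
        """Isotropic kappa velocity distribution (one common normalization;
        requires kappa > 3/2). Tends to the Maxwellian as kappa -> infinity."""
        norm = n / (np.pi * kappa * theta**2) ** 1.5 \
               * gamma(kappa + 1.0) / gamma(kappa - 0.5)
        return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

    def f_maxwell(v, n, theta):
        return n / (np.pi * theta**2) ** 1.5 * np.exp(-(v / theta) ** 2)

    # The superthermal tail: the kappa distribution exceeds the Maxwellian
    # by orders of magnitude at a few thermal speeds.
    v = np.linspace(0.0, 5.0, 6)   # velocity in units of theta
    print(f_kappa(v, 1.0, 1.0, 3.0) / f_maxwell(v, 1.0, 1.0))
    ```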

  13. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases.

    PubMed

    Hollingsworth, T Déirdre; Adams, Emily R; Anderson, Roy M; Atkins, Katherine; Bartsch, Sarah; Basáñez, María-Gloria; Behrend, Matthew; Blok, David J; Chapman, Lloyd A C; Coffeng, Luc; Courtenay, Orin; Crump, Ron E; de Vlas, Sake J; Dobson, Andy; Dyson, Louise; Farkas, Hajnal; Galvani, Alison P; Gambhir, Manoj; Gurarie, David; Irvine, Michael A; Jervis, Sarah; Keeling, Matt J; Kelly-Hope, Louise; King, Charles; Lee, Bruce Y; Le Rutte, Epke A; Lietman, Thomas M; Ndeffo-Mbah, Martial; Medley, Graham F; Michael, Edwin; Pandey, Abhishek; Peterson, Jennifer K; Pinsent, Amy; Porco, Travis C; Richardus, Jan Hendrik; Reimer, Lisa; Rock, Kat S; Singh, Brajendra K; Stolk, Wilma; Swaminathan, Subramanian; Torr, Steve J; Townsend, Jeffrey; Truscott, James; Walker, Martin; Zoueva, Alexandra

    2015-12-09

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an overview of a collection of novel model-based analyses which aim to address key questions on the dynamics of transmission and control of nine NTDs: Chagas disease, visceral leishmaniasis, human African trypanosomiasis, leprosy, soil-transmitted helminths, schistosomiasis, lymphatic filariasis, onchocerciasis and trachoma. Several common themes resonate throughout these analyses, including: the importance of epidemiological setting for the success of interventions; targeting groups who are at highest risk of infection or re-infection; and reaching populations who are not accessing interventions and may act as a reservoir for infection. The results also highlight the challenge of maintaining elimination 'as a public health problem' when true elimination is not reached. The models elucidate the factors that may contribute most to the persistence of disease, and the analyses discuss the requirements for eventually achieving true elimination, if that is possible. Overall this collection presents new analyses to inform current control initiatives. These papers form a base from which further development of the models and more rigorous validation against a variety of datasets can help to give more detailed advice. At the moment, the models' predictions are being considered as the world prepares for a final push towards control or elimination of neglected tropical diseases by 2020.

  14. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015

    PubMed Central

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions), which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide a quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that, when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be ‘invaded’ by a newcomer third party very quickly, while the second remains immune to such invasion. PMID:27171226

  15. Quantitative Structure – Property Relationship Modeling of Remote Liposome Loading Of Drugs

    PubMed Central

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-01-01

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a dataset including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and five-fold external validation. The external prediction accuracy for binary models was as high as 91–96%; for continuous models the mean coefficient R2 for regression between predicted versus observed values was 0.76–0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932
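
    The modeling workflow, a binary classifier for high vs. low initial D/L plus a continuous regressor for D/L, each judged by 5-fold cross-validation, can be sketched generically. The random-forest learner, descriptor matrix and response below are synthetic stand-ins; the paper's actual descriptors, learners and 366 experiments are not reproduced here.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(366, 20))  # stand-in descriptors + experimental conditions
    dl = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=366)  # initial D/L
    high = (dl > np.median(dl)).astype(int)  # binary endpoint: high vs. low D/L

    # Binary model (accuracy) and continuous model (R^2), each with 5-fold CV.
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, high, cv=5)
    r2 = cross_val_score(RandomForestRegressor(random_state=0), X, dl,
                         cv=5, scoring="r2")
    print(acc.mean(), r2.mean())
    ```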

  16. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cells' response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to a lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
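
    As a flavor of what the pathway optimization involves, the sketch below fits the two parameters of a normalized Hill transfer function, of the kind used in constrained fuzzy logic, to noisy measurements of a single stimulus-to-signal edge. A real application optimizes many such edges over a network topology; the data here are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def norm_hill(x, n, k):
        """Normalized Hill transfer function (maps [0,1] to [0,1], f(1) = 1)."""
        return x**n * (1.0 + k**n) / (x**n + k**n)

    # Synthetic measurements of a single stimulus -> phospho-signal edge.
    x = np.linspace(0.0, 1.0, 11)
    rng = np.random.default_rng(1)
    y_obs = norm_hill(x, 3.0, 0.4) + 0.02 * rng.normal(size=x.size)

    def cost(p):
        n, k = p
        return np.sum((norm_hill(x, n, k) - y_obs) ** 2)

    # The full method optimizes many such edge parameters at once;
    # here a single edge is fit as a regular nonlinear program.
    fit = minimize(cost, x0=[2.0, 0.5], method="L-BFGS-B",
                   bounds=[(1.0, 10.0), (0.01, 1.0)])
    print(fit.x)  # recovered (n, k)
    ```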

  17. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cells' response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to a lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  18. Antiproliferative Pt(IV) complexes: synthesis, biological activity, and quantitative structure-activity relationship modeling.

    PubMed

    Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico

    2010-09-01

    Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV)→Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.

  19. Quantitative structure-property relationship modeling of remote liposome loading of drugs.

    PubMed

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-06-10

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a data set including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and 5-fold external validation. The external prediction accuracy for binary models was as high as 91-96%; for continuous models the mean coefficient R(2) for regression between predicted versus observed values was 0.76-0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments.

  20. Quantitative modeling of bioconcentration factors of carbonyl herbicides using multivariate image analysis.

    PubMed

    Freitas, Mirlaine R; Barigye, Stephen J; Daré, Joyce K; Freitas, Matheus P

    2016-06-01

    The bioconcentration factor (BCF) is an important parameter used to estimate the propensity of chemicals to accumulate in aquatic organisms from the ambient environment. While simple regressions for estimating the BCF of chemical compounds from water solubility or the n-octanol/water partition coefficient have been proposed in the literature, these models do not always yield good correlations and more descriptive variables are required for better modeling of BCF data for a given series of organic pollutants, such as some herbicides. Thus, the logBCF values for a set of carbonyl herbicides comprising amide, urea, carbamate and thiocarbamate groups were quantitatively modeled using multivariate image analysis (MIA) descriptors, derived from colored image representations for chemical structures. The logBCF model was calibrated and rigorously validated (r(2) = 0.79, q(2) = 0.70 and rtest(2) = 0.81), providing a comprehensive three-parameter linear equation after variable selection (logBCF = 5.682 - 0.00233 × X9774 - 0.00070 × X813 - 0.00273 × X5144); the variables represent pixel coordinates in the multivariate image. Finally, chemical interpretation of the obtained models in terms of the structural characteristics responsible for the enhanced or reduced logBCF values was performed, providing key leads in the prospective development of more eco-friendly synthetic herbicides.
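
    Since the final model is a three-term linear equation in pixel intensities, it can be evaluated directly. The coefficients below are those reported in the abstract, while the pixel intensity inputs are invented placeholders.

    ```python
    # The reported three-pixel MIA model, evaluated directly.
    def log_bcf(x9774, x813, x5144):
        return 5.682 - 0.00233 * x9774 - 0.00070 * x813 - 0.00273 * x5144

    print(log_bcf(800.0, 1200.0, 600.0))  # hypothetical pixel intensities
    ```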

  1. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    PubMed

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype-by-environment interaction and polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation.

  2. Quantitative saltwater modeling for validation of sub-grid scale LES turbulent mixing and transport models for fire

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro; Marshall, Andre; Gollner, Michael

    2015-11-01

    A quantitative understanding of turbulent mixing and transport in buoyant flows is indispensable for accurate modeling of combustion, fire dynamics and smoke transport used in both fire safety design and investigation. This study describes the turbulent mixing behavior of scaled, unconfined plumes using a quantitative saltwater modeling technique. An analysis of density difference turbulent fluctuations, captured as the collected images are scaled down in resolution, allows for the determination of the largest dimension over which LES averaging should be performed. This is important because LES models must assume a distribution for sub-grid scale mixing, such as the β-PDF distribution. We showed that there is a loss of fidelity in resolving the flow for a cell size above 0.54D*, where D* is a characteristic length scale for the plume. This point represents the threshold above which the fluctuations start to grow monotonically. Turbulence statistics were also analyzed in terms of span-wise intermittency and time and space correlation coefficients. A substantial amount of ambient fluid (fresh water) was unexpectedly found in the core of the plume, and the mixing process under buoyant conditions was found to depend on the resolution of the measurements used.

  3. The role of pre-existing tectonic structures and magma chamber shape on the geometry of resurgent blocks: Analogue models

    NASA Astrophysics Data System (ADS)

    Marotta, Enrica; de Vita, Sandro

    2014-02-01

    A set of analogue models has been carried out to understand the role of an asymmetric magma chamber on the resurgence-related deformation of a previously deformed crustal sector. The results are then compared with those of similar experiments, previously performed using a symmetric magma chamber. Two lines of experiments were performed to simulate resurgence in an area with a simple graben-like structure and resurgence in a caldera that collapsed within the previously generated graben-like structure. On the basis of commonly accepted scaling laws, we used dry-quartz sand to simulate the brittle behaviour of the crust and Newtonian silicone to simulate the ductile behaviour of the intruding magma. An asymmetric shape of the magma chamber was simulated by moulding the upper surface of the silicone. The resulting empty space was then filled with sand. The results of the asymmetric-resurgence experiments are similar to those obtained with symmetrically shaped silicone. In the sample with a simple graben-like structure, resurgence occurs through the formation of a discrete number of differentially displaced blocks. The most uplifted portion of the deformed depression floor is affected by newly formed, high-angle, inward-dipping reverse ring-faults. The least uplifted portion of the caldera is affected by normal faults with similar orientation, either newly formed or resulting from reactivation of the pre-existing graben faults. This asymmetric block resurgence is also observed in experiments performed with a previous caldera collapse. In this case, the caldera-collapse-related reverse ring-fault is completely erased along the shortened side, and enhances the effect of the extensional faults on the opposite side, so facilitating the intrusion of the silicone. The most uplifted sector, due to an asymmetrically shaped intrusion, is always located where the overburden is thickest. These results suggest that the stress field induced by resurgence is likely dictated by

  4. Quantitative Limits on Small Molecule Transport via the Electropermeome - Measuring and Modeling Single Nanosecond Perturbations.

    PubMed

    Sözer, Esin B; Levine, Zachary A; Vernier, P Thomas

    2017-12-01

    The detailed molecular mechanisms underlying the permeabilization of cell membranes by pulsed electric fields (electroporation) remain obscure despite decades of investigative effort. To advance beyond descriptive schematics to the development of robust, predictive models, empirical parameters in existing models must be replaced with physics- and biology-based terms anchored in experimental observations. We report here absolute values for the uptake of YO-PRO-1, a small-molecule fluorescent indicator of membrane integrity, into cells after a single electric pulse lasting only 6 ns. We correlate these measured values, based on fluorescence microphotometry of hundreds of individual cells, with a diffusion-based geometric analysis of pore-mediated transport and with molecular simulations of transport across electropores in a phospholipid bilayer. The results challenge the "drift and diffusion through a pore" model that dominates conventional explanatory schemes for the electroporative transfer of small molecules into cells and point to the necessity for a more complex model.

  5. 40 CFR Table 4 to Subpart Bbbb of... - Model Rule-Class II Emission Limits for Existing Small Municipal Waste Combustion Unit a

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Existing Small Municipal Waste Combustion Unit a 4 Table 4 to Subpart BBBB of Part 60 Protection of... NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion... Part 60—Model Rule—Class II Emission Limits for Existing Small Municipal Waste Combustion Unit a...

  6. 40 CFR Table 2 to Subpart Bbbb of... - Model Rule-Class I Emission Limits for Existing Small Municipal Waste Combustion Units a

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Existing Small Municipal Waste Combustion Units a 2 Table 2 to Subpart BBBB of Part 60 Protection of... NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion... Part 60—Model Rule—Class I Emission Limits for Existing Small Municipal Waste Combustion Units a...

  7. Quantitative Proteomic and Phosphoproteomic Comparison of 2D and 3D Colon Cancer Cell Culture Models.

    PubMed

    Yue, Xiaoshan; Lukowski, Jessica K; Weaver, Eric M; Skube, Susan B; Hummon, Amanda B

    2016-12-02

    Cell cultures are widely used model systems. Some immortalized cell lines can be grown either in two-dimensional (2D) adherent monolayers or in three-dimensional (3D) multicellular aggregates, or spheroids. Here, the quantitative proteomes and phosphoproteomes of colon carcinoma HT29 cells cultured in 2D monolayers and 3D spheroids were compared using a stable isotope labeling by amino acids in cell culture (SILAC) strategy. Two biological replicates from each sample were examined, and notable differences in both the proteome and the phosphoproteome were determined by nanoliquid chromatography tandem mass spectrometry (LC-MS/MS) to assess how growth configuration affects molecular expression. A total of 5867 protein groups, including 2523 phosphoprotein groups, and 8733 phosphopeptides were identified in the samples. The Gene Ontology analysis revealed GO terms enriched in the 3D samples for RNA binding, nucleic acid binding, enzyme binding, cytoskeletal protein binding, and histone binding among molecular functions (MF), and for cell cycle, cytoskeleton organization, and DNA metabolic process among biological processes (BP). The KEGG pathway analysis indicated that 3D cultures are enriched for oxidative phosphorylation pathways, metabolic pathways, peroxisome pathways, and biosynthesis of amino acids. In contrast, analysis of the phosphoproteomes indicated that 3D cultures have decreased phosphorylation, consistent with slower growth rates and lower cell-to-extracellular-matrix interactions. In sum, these results provide quantitative assessments of the effects of culturing cells in 2D versus 3D configurations on the proteome and phosphoproteome.

  8. Climate change and dengue: a critical and systematic review of quantitative modelling approaches

    PubMed Central

    2014-01-01

    Background Many studies have found associations between climatic conditions and dengue transmission. However, there is a debate about the future impacts of climate change on dengue transmission. This paper reviewed epidemiological evidence on the relationship between climate and dengue with a focus on quantitative methods for assessing the potential impacts of climate change on global dengue transmission. Methods A literature search was conducted in October 2012, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search focused on peer-reviewed journal articles published in English from January 1991 through October 2012. Results Sixteen studies met the inclusion criteria and most studies showed that the transmission of dengue is highly sensitive to climatic conditions, especially temperature, rainfall and relative humidity. Studies on the potential impacts of climate change on dengue indicate increased climatic suitability for transmission and an expansion of the geographic regions at risk during this century. A variety of quantitative modelling approaches were used in the studies. Several key methodological issues and current knowledge gaps were identified through this review. Conclusions It is important to assemble spatio-temporal patterns of dengue transmission compatible with long-term data on climate and other socio-ecological changes and this would advance projections of dengue risks associated with climate change. PMID:24669859

  9. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    NASA Technical Reports Server (NTRS)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope-0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  10. 40 CFR Table 4 to Subpart Mmmm of... - Model Rule-Operating Parameters for Existing Sewage Sludge Incineration Units a

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Existing Sewage Sludge Incineration Units a 4 Table 4 to Subpart MMMM of Part 60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Pt....

  11. 40 CFR Table 4 to Subpart Mmmm of... - Model Rule-Operating Parameters for Existing Sewage Sludge Incineration Units a

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Existing Sewage Sludge Incineration Units a 4 Table 4 to Subpart MMMM of Part 60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Pt....

  12. Bayesian model choice and search strategies for mapping interacting quantitative trait Loci.

    PubMed Central

    Yi, Nengjun; Xu, Shizhong; Allison, David B

    2003-01-01

    Most complex traits of animals, plants, and humans are influenced by multiple genetic and environmental factors. Interactions among multiple genes play fundamental roles in the genetic control and evolution of complex traits. Statistical modeling of interaction effects in quantitative trait loci (QTL) analysis must accommodate a very large number of potential genetic effects, which presents a major challenge to determining the genetic model with respect to the number of QTL, their positions, and their genetic effects. In this study, we use the methodology of Bayesian model and variable selection to develop strategies for identifying multiple QTL with complex epistatic patterns in experimental designs with two segregating genotypes. Specifically, we develop a reversible jump Markov chain Monte Carlo algorithm to determine the number of QTL and to select main and epistatic effects. With the proposed method, we can jointly infer the genetic model of a complex trait and the associated genetic parameters, including the number, positions, and main and epistatic effects of the identified QTL. Our method can map a large number of QTL with any combination of main and epistatic effects. Utility and flexibility of the method are demonstrated using both simulated data and a real data set. Sensitivity of posterior inference to prior specifications of the number and genetic effects of QTL is investigated. PMID:14573494

  13. Quantitative structure-property relationship modeling of Grätzel solar cell dyes.

    PubMed

    Venkatraman, Vishwesh; Åstrand, Per-Olof; Alsberg, Bjørn Kåre

    2014-01-30

    With fossil fuel reserves on the decline, there is increasing focus on the design and development of low-cost organic photovoltaic devices, in particular, dye-sensitized solar cells (DSSCs). The power conversion efficiency (PCE) of a DSSC is heavily influenced by the chemical structure of the dye. However, as far as we know, no predictive quantitative structure-property relationship models for DSSCs with PCE as one of the response variables have been reported. Thus, we report for the first time the successful application of comparative molecular field analysis (CoMFA) and vibrational frequency-based eigenvalue (EVA) descriptors to model molecular structure-photovoltaic performance relationships for a set of 40 coumarin derivatives. The results show that the models obtained provide statistically robust predictions of important photovoltaic parameters such as PCE, the open-circuit voltage (V(OC)), short-circuit current (J(SC)) and the peak absorption wavelength λ(max). Some of our findings based on the analysis of the models are in accordance with those reported in the literature. These structure-property relationships can be applied to the rational structural design and evaluation of new photovoltaic materials.

  14. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    PubMed

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for the analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Within a small data set, however, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).
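
    The core idea, a model trained on pair descriptors to predict activity changes, can be sketched in a few lines. Below, the pairwise descriptor is simply the difference of two molecules' descriptor vectors, a deliberate simplification of the transformation-plus-environment descriptors the paper describes, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    desc = rng.normal(size=(200, 16))                          # per-molecule descriptors
    act = desc[:, 0] * 1.5 + rng.normal(scale=0.2, size=200)   # activities

    # Build matched pairs (i, j): the pairwise descriptor is the difference
    # of the two molecules' descriptor vectors, and the target is the
    # activity change produced by the transformation i -> j.
    i, j = rng.integers(0, 200, size=(2, 500))
    X_pair = desc[j] - desc[i]
    d_act = act[j] - act[i]

    model = RandomForestRegressor(random_state=0).fit(X_pair, d_act)
    # Estimate the activity change for a (here, previously seen) transformation:
    print(model.predict(X_pair[:1]))
    ```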

  15. Genome-wide evaluation for quantitative trait loci under the variance component model

    PubMed Central

    Han, Lide

    2010-01-01

    The identity-by-descent (IBD) based variance component analysis is an important method for mapping quantitative trait loci (QTL) in outbred populations. The interval-mapping approach and various modified versions of it may have limited use in evaluating the genetic variances of the entire genome because they require evaluation of multiple models and model selection. In this study, we developed a multiple variance component model for genome-wide evaluation using both the maximum likelihood (ML) method and the MCMC-implemented Bayesian method. We placed one QTL every few cM on the entire genome and estimated the QTL variances and positions simultaneously in a single model. Genomic regions that have no QTL usually showed no evidence of QTL, while regions with large QTL always showed strong evidence of QTL. While the Bayesian method produced the optimal result, the ML method is computationally more efficient. Simulation experiments were conducted to demonstrate the efficacy of the new methods. Electronic supplementary material The online version of this article (doi:10.1007/s10709-010-9497-1) contains supplementary material, which is available to authorized users. PMID:20835884

  16. Investigation and prediction of protein precipitation by polyethylene glycol using quantitative structure-activity relationship models.

    PubMed

    Hämmerling, Frank; Ladd Effio, Christopher; Andris, Sebastian; Kittelmann, Jörg; Hubbuch, Jürgen

    2017-01-10

    Precipitation of proteins is considered an effective purification method and has proven its potential to replace costly chromatography processes. Besides salts and polyelectrolytes, polymers such as polyethylene glycol (PEG) are commonly used for precipitation applications under mild conditions. Process development for protein precipitation steps, however, is still based mainly on heuristic approaches and high-throughput experimentation, owing to a lack of understanding of the underlying mechanisms. In this work we apply quantitative structure-activity relationships (QSARs) to model two parameters, the discontinuity point m* and the β-value, that describe the complete precipitation curve of a protein under defined conditions. The generated QSAR models are sensitive to the protein type, pH, and ionic strength. It was found that the discontinuity point m* depends mainly on protein molecular structure properties and electrostatic surface properties, whereas the β-value is influenced by the variance in electrostatics and hydrophobicity on the protein surface. The models for m* and the β-value exhibit a good correlation between observed and predicted data, with a coefficient of determination of R(2)≥0.90, and hence are able to accurately predict precipitation curves for proteins. The predictive capabilities were demonstrated for a set of combinations of protein type, pH, and ionic strength not included in the generation of the models, and good agreement between predicted and experimental data was achieved.

  17. Thermal models, stable isotopes and cooling ages from the incrementally constructed Tuolumne batholith, Sierra Nevada: why large chambers did exist

    NASA Astrophysics Data System (ADS)

    Paterson, S. R.; Okaya, D. A.; Memeti, V.; Mundil, R.; Lackey, J.; Clemens-Knott, D.

    2009-12-01

    ,00-500,000 years, (2) the outer margins of the main chamber solidified prior to emplacement of inner magma batches, but that (3) large parts of the main chamber stayed above the solidus for 1-2 million years resulting in large magma chambers. Our thermochronology (U-Pb zircon and titanite, 40Ar/39Ar of hornblende and large and small biotite populations) in general agrees with the above conclusions but does show some intriguing differences from the thermal modeling predictions, particularly in locations where we think parts of the chamber were removed by or recycled into younger pulses. Finally, the conclusion that large magma chambers existed matches our geochemical studies, which indicate that in situ fractionation dominated in the rapidly crystallized magma lobes whereas additional mixing processes obscured fractionation patterns in the more slowly crystallized main chambers, explaining the more complex compositional patterns and mineral histories in this part of the batholith.

  18. A Suite of Models to Support the Quantitative Assessment of Spread in Pest Risk Analysis

    PubMed Central

    Robinet, Christelle; Kehlenbeck, Hella; Kriticos, Darren J.; Baker, Richard H. A.; Battisti, Andrea; Brunel, Sarah; Dupin, Maxime; Eyre, Dominic; Faccoli, Massimo; Ilieva, Zhenya; Kenis, Marc; Knight, Jon; Reynaud, Philippe; Yart, Annie; van der Werf, Wopke

    2012-01-01

    Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice. PMID:23056174
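
    The four models differ in detail, but the shared skeleton, local population growth plus neighbourhood dispersal over a habitat layer, can be illustrated compactly. The grid, parameters and the crude multiplicative habitat mask below are all invented; this is a toy in the spirit of the simplest density-based model, not one of the four published models.

    ```python
    import numpy as np

    def spread(density, habitat, r=0.8, d=0.2, steps=10):
        """Toy spread model: logistic local growth plus nearest-neighbour
        diffusion, crudely masked by a habitat suitability layer."""
        for _ in range(steps):
            growth = r * density * (1.0 - density)
            # Explicit diffusion: each cell exchanges a fraction d with its
            # four neighbours (np.roll wraps at the edges; fine for a sketch).
            neigh = (np.roll(density, 1, 0) + np.roll(density, -1, 0) +
                     np.roll(density, 1, 1) + np.roll(density, -1, 1))
            density = density + growth + d * (neigh / 4.0 - density)
            density = np.clip(density * habitat, 0.0, 1.0)
        return density

    habitat = np.ones((50, 50)); habitat[:, 25:] = 0.3   # poorer habitat to the east
    start = np.zeros((50, 50)); start[25, 25] = 1.0      # point of introduction
    print(spread(start, habitat).round(2)[20:30, 20:30])
    ```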

  19. A suite of models to support the quantitative assessment of spread in pest risk analysis.

    PubMed

    Robinet, Christelle; Kehlenbeck, Hella; Kriticos, Darren J; Baker, Richard H A; Battisti, Andrea; Brunel, Sarah; Dupin, Maxime; Eyre, Dominic; Faccoli, Massimo; Ilieva, Zhenya; Kenis, Marc; Knight, Jon; Reynaud, Philippe; Yart, Annie; van der Werf, Wopke

    2012-01-01

    Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice.

  20. An Efficient Bayesian Model Selection Approach for Interacting Quantitative Trait Loci Models With Many Effects

    PubMed Central

    Yi, Nengjun; Shriner, Daniel; Banerjee, Samprit; Mehta, Tapan; Pomp, Daniel; Yandell, Brian S.

    2007-01-01

    We extend our Bayesian model selection framework for mapping epistatic QTL in experimental crosses to include environmental effects and gene–environment interactions. We propose a new, fast Markov chain Monte Carlo algorithm to explore the posterior distribution of unknowns. In addition, we take advantage of any prior knowledge about genetic architecture to increase posterior probability on more probable models. These enhancements have significant computational advantages in models with many effects. We illustrate the proposed method by detecting new epistatic and gene–sex interactions for obesity-related traits in two real data sets of mice. Our method has been implemented in the freely available package R/qtlbim (http://www.qtlbim.org) to facilitate the general usage of the Bayesian methodology for genomewide interacting QTL analysis. PMID:17483424

  1. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part II Scholastic Models. Part II, Chapter 4.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…

  2. Quantitative Comparison of a New Ab Initio Micrometeor Ablation Model with an Observationally Verifiable Standard Model

    NASA Astrophysics Data System (ADS)

    Meisel, David D.; Szasz, Csilla; Kero, Johan

    2008-06-01

    The Arecibo UHF radar is able to detect the head-echos of micron-sized meteoroids up to velocities of 75 km/s over a height range of 80-140 km. Because of their small size there are many uncertainties involved in calculating their above-atmosphere properties as needed for orbit determination. An ab initio model of meteor ablation has been devised that should work over the mass range 10^-16 kg to 10^-7 kg, but the faint end of this range cannot be observed by any other method and so direct verification is not possible. On the other hand, the EISCAT UHF radar system detects micrometeors in the high-mass part of this range and its observations can be fit to a “standard” ablation model and calibrated to optical observations (Szasz et al. 2007). In this paper, we present a preliminary comparison of the two models, one observationally confirmable. Among the features of the ab initio model that are different from the “standard” model are: (1) it uses the experimentally based low-pressure vaporization theory of O’Hanlon (A user’s guide to vacuum technology, 2003) for ablation, (2) it uses velocity-dependent functions fit from experimental data on heat transfer, luminosity and ionization efficiencies measured by Friichtenicht and Becker (NASA Special Publication 319: 53, 1973) for micron-sized particles, (3) it assumes a density and temperature dependence of the micrometeoroid and ablation product specific heats, (4) it assumes a density- and size-dependent value for the thermal emissivity and (5) it uses a unified synthesis of experimental data for the most important meteoroid elements and their oxides through least-squares fits (as functions of temperature, density, and/or melting point) of the tables of thermodynamic parameters given in Weast (CRC Handbook of Chemistry and Physics, 1984), Gray (American Institute of Physics Handbook, 1972), and Cox (Allen’s Astrophysical Quantities, 2000). This utilization of mostly experimentally determined data is the main reason for

  3. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    PubMed Central

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M.; Rahman, Noorsaadah Abd.

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessments, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, where 24 of these compounds were in the training set and the remaining 12 compounds were in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. Based on this method, r2, r2(CV) and r2(prediction) values of 0.87, 0.71 and 0.70 were obtained. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction for the external test set) of 0.52. The developed QSAR model was used to discover some compounds as new lead photosensitizers from this external test set. PMID:22272096

  4. Effect of Arterial Deprivation on Growing Femoral Epiphysis: Quantitative Magnetic Resonance Imaging Using a Piglet Model

    PubMed Central

    Cheon, Jung-Eun; Kim, In-One; Kim, Woo Sun; Choi, Young Hun

    2015-01-01

    Objective To investigate the usefulness of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion MRI for the evaluation of femoral head ischemia. Materials and Methods Unilateral femoral head ischemia was induced by selective embolization of the medial circumflex femoral artery in 10 piglets. MRI was performed immediately (1 hour) after embolization and at 1, 2, and 4 weeks post-embolization. Apparent diffusion coefficients (ADCs) were calculated for the femoral head. The estimated pharmacokinetic parameters (Kep and Ve from a two-compartment model) and semi-quantitative parameters including peak enhancement, time-to-peak (TTP), and contrast washout were evaluated. Results The epiphyseal ADC values of the ischemic hip decreased immediately (1 hour) after embolization. However, they increased rapidly at 1 week after embolization and remained elevated until 4 weeks after embolization. Perfusion MRI of ischemic hips showed decreased epiphyseal perfusion with decreased Kep immediately after embolization. Signal intensity-time curves showed delayed TTP with limited contrast washout immediately post-embolization. At 1-2 weeks after embolization, spontaneous reperfusion was observed in ischemic epiphyses. The changes in ADC (p = 0.043) and Kep (p = 0.043) between immediately (1 hour) after embolization and 1 week post-embolization were significant. Conclusion Diffusion MRI and the pharmacokinetic model obtained from DCE-MRI are useful in depicting early changes of perfusion and tissue damage in this model of femoral head ischemia in skeletally immature piglets. PMID:25995692
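
    The Kep and Ve parameters come from fitting a two-compartment (Tofts-type) model to the signal-time curves. A minimal forward simulation of that model is sketched below; the gamma-variate arterial input function and all parameter values are assumptions for illustration, not values from the study.

    ```python
    import numpy as np

    def tofts_tissue_conc(t, cp, ktrans, kep):
        """Two-compartment (Tofts) tissue concentration:
        Ct(t) = Ktrans * integral of Cp(tau) * exp(-kep * (t - tau)) dtau,
        with Ve = Ktrans / kep. Evaluated by discrete convolution."""
        dt = t[1] - t[0]
        kernel = np.exp(-kep * t)
        return ktrans * np.convolve(cp, kernel)[: t.size] * dt

    t = np.arange(0.0, 300.0, 1.0)                   # time, s
    cp = 5.0 * (t / 30.0) * np.exp(1.0 - t / 30.0)   # toy arterial input function

    # Illustrative parameter pairs: reduced Ktrans and Kep in the ischemic hip.
    ct_ischemic = tofts_tissue_conc(t, cp, ktrans=0.02, kep=0.05)
    ct_control = tofts_tissue_conc(t, cp, ktrans=0.08, kep=0.20)
    print(ct_control.max() / ct_ischemic.max())      # relative peak enhancement
    ```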

  5. Quantitative structure-retention relationship modeling of gas chromatographic retention times based on thermodynamic data.

    PubMed

    Ebrahimi-Najafabadi, Heshmatollah; McGinitie, Teague M; Harynuk, James J

    2014-09-05

    Thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for 156 compounds comprising alkanes, alkyl halides and alcohols were determined for a 5% phenyl 95% methyl stationary phase. A Nelder-Mead simplex optimization was used to obtain the thermodynamic parameters rapidly. Two methodologies, external validation and leave-one-out cross validation, were applied to assess the robustness of the estimated thermodynamic parameters. The largest absolute errors in predicted retention time across all temperature ramps and all compounds were 1.5 and 0.3 s for the external and internal sets, respectively. The possibility of an in silico extension of the thermodynamic library was tested using a quantitative structure-retention relationship (QSRR) methodology. The estimated thermodynamic parameters were utilized to develop QSRR models. Individual partial least squares (PLS) models were developed for each of the three classes of molecules. R(2) values for the test sets of all models across all temperature ramps were larger than 0.99, and the average relative errors in retention time predictions of the test sets for alkanes, alcohols, and alkyl halides were 1.8%, 2.4%, and 2.5%, respectively.
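
    The link between the thermodynamic parameters and retention can be made concrete with a toy isothermal fit: the retention factor follows from ΔH and ΔS via k(T) = exp(ΔS/R - ΔH/(RT))/β, and a Nelder-Mead simplex recovers the parameters from retention times. The phase ratio, hold-up time, temperatures and "data" below are all invented, and ΔCP is omitted for brevity.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    R = 8.314      # gas constant, J/(mol K)
    beta = 250.0   # assumed column phase ratio
    t_m = 60.0     # assumed hold-up time, s

    def retention_time(T, dH, dS):
        # Isothermal retention time from k(T) = exp(dS/R - dH/(R*T)) / beta.
        k = np.exp(dS / R - dH / (R * T)) / beta
        return t_m * (1.0 + k)

    T = np.array([380.0, 400.0, 420.0, 440.0])   # column temperatures, K
    t_obs = retention_time(T, -45e3, -70.0)      # synthetic "measurements"

    # Nelder-Mead simplex fit (parameters left unscaled for brevity).
    fit = minimize(lambda p: np.sum((retention_time(T, *p) - t_obs) ** 2),
                   x0=[-40e3, -60.0], method="Nelder-Mead")
    print(fit.x)  # recovered (dH, dS)
    ```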

  6. Modelling passive diastolic mechanics with quantitative MRI of cardiac structure and function.

    PubMed

    Wang, Vicky Y; Lam, H I; Ennis, Daniel B; Cowan, Brett R; Young, Alistair A; Nash, Martyn P

    2009-10-01

    The majority of patients with clinically diagnosed heart failure have normal systolic pump function and are commonly categorized as suffering from diastolic heart failure. The left ventricle (LV) remodels its structure and function to adapt to pathophysiological changes in geometry and loading conditions, which in turn can alter the passive ventricular mechanics. In order to better understand passive ventricular mechanics, an LV finite element (FE) model was customized to geometric data segmented from in vivo tagged magnetic resonance imaging (MRI) data and to myofibre orientations derived from ex vivo diffusion tensor MRI (DTMRI) of a canine heart using nonlinear finite element fitting techniques. MRI tissue tagging enables quantitative evaluation of cardiac mechanical function with high spatial and temporal resolution, whilst the direction of maximum water diffusion in each voxel of a DTMRI directly corresponds to the local myocardial fibre orientation. Due to differences in myocardial geometry between in vivo and ex vivo imaging, myofibre orientations were mapped into the geometric FE model using host mesh fitting (a free-form deformation technique). Pressure recordings, temporally synchronized to the tagging data, were used as the loading constraints to simulate LV deformation during diastole. Simulation of diastolic LV mechanics allowed us to estimate the stiffness of the passive LV myocardium based on kinematic data obtained from tagged MRI. Integrated physiological modelling of this kind will allow more insight into the mechanics of the LV on an individualized basis, thereby improving our understanding of the underlying structural basis of mechanical dysfunction under pathological conditions.

  7. Quantitative systems pharmacology as an extension of PK/PD modeling in CNS research and development.

    PubMed

    Geerts, Hugo; Spiros, Athan; Roberts, Patrick; Carr, Robert

    2013-06-01

    Quantitative systems pharmacology (QSP) is a recent addition to the modeling and simulation toolbox for drug discovery and development and is based upon mathematical modeling of biophysically realistic biological processes in the disease area of interest. The combination of preclinical neurophysiology information with clinical data on pathology, imaging and clinical scales makes it a truly translational tool. We discuss the specific characteristics of QSP and where it differs from PK/PD modeling, such as its ability to provide support in target validation, clinical candidate selection and multi-target MedChem projects. In clinical development, the approach can provide additional and unique evaluation of the effects of comedications, genotypes and disease states (patient populations) even before the initiation of actual trials. A powerful property is the ability to perform failure analysis. Using examples from CNS R&D in schizophrenia and Alzheimer's disease, we illustrate how this approach can make a difference for CNS R&D projects.

  8. Hybrid agent-based model for quantitative in-silico cell-free protein synthesis.

    PubMed

    Semenchenko, Anton; Oliveira, Guilherme; Atman, A P F

    2016-12-01

    An advanced vision of mRNA translation is presented through a hybrid modeling approach. The dynamics of polysome formation were investigated by computer simulation combining an agent-based model with a fine-grained Markov chain representation of the chemical kinetics. This approach allowed for the investigation of polysome dynamics under non-steady-state and non-continuum conditions. The model is validated by quantitative comparison of the simulation results with Luciferase protein production in a cell-free system, as well as by testing the hypothesis regarding two possible mechanisms of the antibiotic Edeine. Calculation of the Hurst exponent demonstrated a relationship between the microscopic properties of amino acid elongation and the fractal dimension of the translation duration time series. The temporal properties of amino acid elongation indicated anti-persistent behavior under low mRNA occupancy and evinced the appearance of long-range interactions within the mRNA-ribosome system at high ribosome density. The dynamic and temporal characteristics of the polysomal system presented here can have a direct impact on studies of co-translational protein folding and provide a validated platform for cell-free system studies.
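
    The Hurst exponent mentioned above can be estimated by classical rescaled-range (R/S) analysis, sketched below on a white-noise placeholder series; values below 0.5 indicate the anti-persistent behavior described in the abstract.

      import numpy as np

      def hurst_rs(x, windows=(8, 16, 32, 64, 128)):
          # Rescaled-range (R/S) estimate of the Hurst exponent H.
          x = np.asarray(x, dtype=float)
          rs, ns = [], []
          for n in windows:
              m = len(x) // n
              if m == 0:
                  continue
              chunks = x[: m * n].reshape(m, n)
              dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
              r = dev.max(axis=1) - dev.min(axis=1)       # range of cumulative deviations
              s = chunks.std(axis=1)                      # per-chunk standard deviation
              ok = s > 0
              rs.append(np.mean(r[ok] / s[ok]))
              ns.append(n)
          return np.polyfit(np.log(ns), np.log(rs), 1)[0]  # slope of log(R/S) vs log(n)

      series = np.random.default_rng(2).normal(size=1024)  # placeholder for a measured series
      print(hurst_rs(series))   # roughly 0.5-0.6 for white noise (R/S has small-sample bias)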

  9. Quantitative analysis and modeling of katanin function in flagellar length control

    PubMed Central

    Kannegaard, Elisa; Rego, E. Hesper; Schuck, Sebastian; Feldman, Jessica L.; Marshall, Wallace F.

    2014-01-01

    Flagellar length control in Chlamydomonas reinhardtii provides a simple model system in which to investigate the general question of how cells regulate organelle size. Previous work demonstrated that Chlamydomonas cytoplasm contains a pool of flagellar precursor proteins sufficient to assemble a half-length flagellum and that assembly of full-length flagella requires synthesis of additional precursors to augment the preexisting pool. The regulatory systems that control the synthesis and regeneration of this pool are not known, although transcriptional regulation clearly plays a role. We used quantitative analysis of length distributions to identify candidate genes controlling pool regeneration and found that a mutation in the p80 regulatory subunit of katanin, encoded by the PF15 gene in Chlamydomonas, alters flagellar length by changing the kinetics of precursor pool utilization. This finding suggests a model in which flagella compete with cytoplasmic microtubules for a fixed pool of tubulin, with katanin-mediated severing allowing easier access to this pool during flagellar assembly. We tested this model using a stochastic simulation that confirms that cytoplasmic microtubules can compete with flagella for a limited tubulin pool, showing that alteration of cytoplasmic microtubule severing could be sufficient to explain the effect of the pf15 mutations on flagellar length. PMID:25143397
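
    The competition model tested above lends itself to a toy stochastic simulation: a flagellum and cytoplasmic microtubules draw on one fixed tubulin pool, and katanin-mediated severing returns cytoplasmic tubulin to the free pool. All rate constants below are illustrative, not fitted values from the study.

      import numpy as np

      rng = np.random.default_rng(3)
      total = 1000.0              # fixed tubulin pool (precursor units)
      L, cyto = 0.0, 400.0        # flagellar length; tubulin held in cytoplasmic MTs
      k_on, k_off = 0.05, 0.01    # flagellar assembly / disassembly constants
      k_seq, sever = 0.002, 0.01  # cytoplasmic sequestration / katanin severing rates

      dt = 0.01
      for _ in range(200_000):
          free = total - L - cyto                  # tubulin available to both consumers
          growth = k_on * free / (1.0 + L)         # assembly slows as the flagellum lengthens
          L = max(L + dt * (growth - k_off * L) + rng.normal(0.0, 0.02) * np.sqrt(dt), 0.0)
          cyto = max(cyto + dt * (k_seq * free - sever * cyto), 0.0)

      print(L)   # raising `sever` (more severing) frees tubulin and lengthens the flagellum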

  10. Using Modified Contour Deformable Model to Quantitatively Estimate Ultrasound Parameters for Osteoporosis Assessment

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Fu; Du, Yi-Chun; Tsai, Yi-Ting; Chen, Tainsong

    Osteoporosis is a systemic skeletal disease characterized by low bone mass and micro-architectural deterioration of bone tissue, leading to bone fragility. Finding an effective method for prevention and early diagnosis of the disease is very important. Several parameters, including broadband ultrasound attenuation (BUA), speed of sound (SOS), and stiffness index (STI), have been used to measure the characteristics of bone tissues. In this paper, we propose a method, namely the modified contour deformable model (MCDM), based on the active contour model (ACM) and active shape model (ASM), for automatically detecting the calcaneus contour from quantitative ultrasound (QUS) parametric images. The results show that the difference between the contours detected by the MCDM and the true boundary for the phantom is less than one pixel. By comparing the phantom ROIs, a significant relationship was found between the contour mean and bone mineral density (BMD), with R=0.99. The influence of selecting different ROI diameters (12, 14, 16 and 18 mm) and different region-selecting methods, including fixed region (ROIfix), automatic circular region (ROIcir) and calcaneal contour region (ROIanat), was evaluated in human subjects. Measurements with large ROI diameters, especially using the fixed region, resulted in high position errors (10-45%). The precision errors of the measured ultrasonic parameters for ROIanat are smaller than those for ROIfix and ROIcir. In conclusion, ROIanat provides more accurate measurement of ultrasonic parameters for the evaluation of osteoporosis and is useful for clinical application.

  11. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... model used by the Center for Biologics Evaluation and Research (CBER) and suggestions for further...: Richard Forshee, Center for Biologics Evaluation and Research (HFM-210), Food and Drug Administration... disease computer simulation models to generate quantitative estimates of the benefits and risks...

  12. Quantitative model for inferring dynamic regulation of the tumour suppressor gene p53

    PubMed Central

    2010-01-01

    Background The availability of various "omics" datasets creates the prospect of performing genome-wide studies of genetic regulatory networks. However, one of the major challenges of using mathematical models to infer genetic regulation from microarray datasets is the lack of information on protein concentrations and activities. Most previous research was based on the assumption that the mRNA levels of a gene are consistent with its protein activities, though this is not always the case. Therefore, a more sophisticated modelling framework, together with corresponding inference methods, is needed to accurately estimate genetic regulation from "omics" datasets. Results This work developed a novel approach, based on a nonlinear mathematical model, to infer genetic regulation from microarray gene expression data. Using the p53 network as a test system, we applied the nonlinear model to estimate the activities of the transcription factor (TF) p53 from the expression levels of its target genes, and to identify the activation/inhibition status of p53 with respect to its target genes. The top 317 predicted putative p53 target genes were supported by DNA sequence analysis. A comparison between our prediction and other published predictions of p53 targets suggests that most putative p53 targets may share a common depleted or enriched sequence signal in their upstream non-coding regions. Conclusions The proposed quantitative model can not only be used to infer the regulatory relationships between a TF and its downstream genes, but can also be applied to estimate the protein activities of a TF from the expression levels of its target genes. PMID:20085646

  13. Mixed linear model approach for mapping quantitative trait loci underlying crop seed traits.

    PubMed

    Qi, T; Jiang, B; Zhu, Z; Wei, C; Gao, Y; Zhu, S; Xu, H; Lou, X

    2014-09-01

    The crop seed is a complex organ that may be composed of the diploid embryo, the triploid endosperm and the diploid maternal tissues. According to the genetic features of seed characters, two genetic models for mapping quantitative trait loci (QTLs) of crop seed traits are proposed, with inclusion of maternal effects, embryo or endosperm effects of QTLs, environmental effects and QTL-by-environment (QE) interactions. The mapping population can be generated either from double back-cross of immortalized F2 (IF2) to the two parents, from random-cross of the IF2, or from selfing of the IF2 population. Candidate marker intervals potentially harboring QTLs are first selected through one-dimensional scanning across the whole genome. The selected candidate marker intervals are then included in the model as cofactors to control background genetic effects on the putative QTL(s). Finally, a QTL full model is constructed and model selection is conducted to eliminate false positive QTLs. The genetic main effects of QTLs, QE interaction effects and the corresponding P-values are computed by a Markov chain Monte Carlo algorithm for the Gaussian mixed linear model via Gibbs sampling. Monte Carlo simulations were performed to investigate the reliability and efficiency of the proposed method. The simulation results showed that the proposed method had high power to accurately detect simulated QTLs and to properly estimate their effects. To demonstrate its usefulness, the proposed method was used to identify the QTLs underlying fiber percentage in an upland cotton IF2 population. A software package, QTLNetwork-Seed, was developed for QTL analysis of seed traits.

  14. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    , the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach that integrates quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management, by combining the NSSs framework with an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over-abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  15. QSTR modeling for qualitative and quantitative toxicity predictions of diverse chemical pesticides in honey bee for regulatory purposes.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Basant, Nikita; Mohan, Dinesh

    2014-09-15

    Pesticides are toxic chemicals designed for specific purposes and can harm nontarget species as well. The honey bee is considered a nontarget test species for the toxicity evaluation of chemicals. Global QSTR (quantitative structure-toxicity relationship) models were established for qualitative and quantitative toxicity prediction of pesticides in the honey bee (Apis mellifera), based on experimental toxicity data for 237 structurally diverse pesticides. Structural diversity of the chemical pesticides and nonlinear dependence in the toxicity data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) QSTR models were constructed for classification (two and four categories) and function optimization problems using the toxicity end point in honey bees. The predictive power of the QSTR models was tested through rigorous validation using internal and external procedures and a wide series of statistical checks. On the complete data set, the PNN-QSTR model rendered a classification accuracy of 96.62% (two-category) and 95.57% (four-category), while the GRNN-QSTR model yielded a correlation (R(2)) of 0.841 between measured and predicted toxicity values, with a mean squared error (MSE) of 0.22. The results suggest that the developed QSTR models can reliably predict the qualitative and quantitative toxicity of pesticides in the honey bee, and both the PNN- and GRNN-based models can be useful tools for predicting the toxicity of new chemical pesticides for regulatory purposes.
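
    In its simplest form, a GRNN of the kind used above is a Nadaraya-Watson kernel-weighted average of the training responses; this sketch uses random placeholder descriptors and toxicity values and a single smoothing parameter sigma.

      import numpy as np

      def grnn_predict(X_train, y_train, X_query, sigma=0.5):
          # Gaussian kernel weight from every query point to every training pattern.
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return (w @ y_train) / w.sum(axis=1)     # kernel-weighted average of responses

      rng = np.random.default_rng(4)
      X = rng.normal(size=(237, 5))                 # descriptors for 237 pesticides (placeholder)
      y = X[:, 0] ** 2 + rng.normal(0.0, 0.1, 237)  # toxicity endpoint (placeholder)

      yhat = grnn_predict(X[:200], y[:200], X[200:])
      print(np.corrcoef(yhat, y[200:])[0, 1] ** 2)  # R^2-style check on held-out compounds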

  16. Parametric modeling for quantitative analysis of pulmonary structure to function relationships

    NASA Astrophysics Data System (ADS)

    Haider, Clifton R.; Bartholmai, Brian J.; Holmes, David R., III; Camp, Jon J.; Robb, Richard A.

    2005-04-01

    While lung anatomy is well understood, pulmonary structure-to-function relationships, such as the complex elastic deformation of the lung during respiration, are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability over the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computation of dynamic anatomic parametric models of the lung during respiration, correlating high-resolution anatomy with underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the surface of the lung. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifest by alterations in the normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial data which quantitatively illustrates detailed anatomic structure-to-pulmonary-function relationships impossible for translational methods to provide.

  17. Simulation of water-use conservation scenarios for the Mississippi Delta using an existing regional groundwater flow model

    USGS Publications Warehouse

    Barlow, Jeannie R.B.; Clark, Brian R.

    2011-01-01

    The Mississippi River alluvial plain in northwestern Mississippi (referred to as the Delta), once a floodplain of the Mississippi River covered with hardwoods and marshland, is now a highly productive agricultural region of large economic importance to Mississippi. Water for irrigation is supplied primarily by the Mississippi River Valley alluvial aquifer, and although the alluvial aquifer has a large reserve, there is evidence that the current rate of water use from it is not sustainable. Using an existing regional groundwater flow model, conservation scenarios were developed for the alluvial aquifer underlying the Delta region in northwestern Mississippi to assess where the implementation of water-use conservation efforts would have the greatest effect on future water availability: either uniformly throughout the Delta, or focused on a cone of depression in the alluvial aquifer underlying the central part of the Delta. Five scenarios were simulated with the Mississippi Embayment Regional Aquifer Study groundwater flow model: (1) a base scenario in which water use remained constant at 2007 rates throughout the entire simulation; (2) a 5-percent 'Delta-wide' conservation scenario in which water use across the Delta was decreased by 5 percent; (3) a 5-percent 'cone-equivalent' conservation scenario in which water use within the area of the cone of depression was decreased by 11 percent (a volume equivalent to the 5-percent Delta-wide conservation scenario); (4) a 25-percent Delta-wide conservation scenario in which water use across the Delta was decreased by 25 percent; and (5) a 25-percent cone-equivalent conservation scenario in which water use within the area of the cone of depression was decreased by 55 percent (a volume equivalent to the 25-percent Delta-wide conservation scenario). The Delta-wide scenarios result in greater average water-level improvements (relative to the base scenario) for the entire Delta area than the cone-equivalent scenarios.

  18. Quantitative modeling of total ionizing dose reliability effects in device silicon dioxide layers

    NASA Astrophysics Data System (ADS)

    Rowsey, Nicole L.

    The electrical breakdown of oxides and oxide/semiconductor interfaces is one of the main reasons for device failure in integrated circuits, especially in devices under high-stress conditions. One high-stress environment of interest is the space environment. All electronics are vulnerable to ionizing radiation; any high-energy particle that passes through an insulating layer will deposit unwanted charge there, causing shifts in device characteristics. Designing electronics for use in space can be a challenge, because much more energetic radiation exists in space than on Earth, as there is no atmosphere in space to collide with, and thereby reduce the energy of, energetic particles. Although oxide charging due to ionizing radiation creates well-known changes in device characteristics, or total ionizing dose effects, it is still poorly understood exactly how these changes come about. There are many theories that draw upon a large body of both experimental work and, more recently, quantum-mechanical first-principles calculations at the molecular level. This work uses FLOODS, a 3D object-oriented device simulator with multi-physics capability, to investigate these theories by simulating oxide degradation in realistic device geometries and comparing the resulting degradation in device characteristics to experimental measurements. The charge trapping and defect-modulated transport models developed and implemented here have resulted in the first quantitative account of the enhanced low-dose-rate sensitivity effect, and are applicable across a comprehensive range of hydrogen environments. Measurements show that devices exposed to ionizing radiation at high dose rates exhibit less degradation than those exposed at low dose rates. Furthermore, the observed trend differs depending on the amount of hydrogen available before, during, and after irradiation. It is therefore important to understand and take into account the effects of dose rate and hydrogen when developing accelerated

  19. Modeling optical behavior of birefringent biological tissues for evaluation of quantitative polarized light microscopy

    NASA Astrophysics Data System (ADS)

    van Turnhout, Mark C.; Kranenbarg, Sander; van Leeuwen, Johan L.

    2009-09-01

    Quantitative polarized light microscopy (qPLM) is a popular tool for the investigation of birefringent architectures in biological tissues. Collagen, the most abundant protein in mammals, is such a birefringent material. Interpretation of results of qPLM in terms of collagen network architecture and anisotropy is challenging, because different collagen networks may yield equal qPLM results. We created a model and used the linear optical behavior of collagen to construct a Jones or Mueller matrix for a histological cartilage section in an optical qPLM train. Histological sections of tendon were used to validate the basic assumption of the model. Results show that information on collagen densities is needed for the interpretation of qPLM results in terms of collagen anisotropy. A parameter that is independent of the optical system and that measures collagen fiber anisotropy is introduced, and its physical interpretation is discussed. With our results, we can quantify which part of different qPLM results is due to differences in collagen densities and which part is due to changes in the collagen network. Because collagen fiber orientation and anisotropy are important for tissue function, these results can improve the biological and medical relevance of qPLM results.
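
    A minimal Jones-calculus sketch of the kind of optical train modeled above: a birefringent section acts as a linear retarder between crossed polarizers, reproducing the textbook sin^2(delta/2) transmitted intensity when the fast axis is at 45 degrees. The retardance and angles are illustrative, not values from the paper.

      import numpy as np

      def polarizer(theta):
          # Jones matrix of an ideal linear polarizer with transmission axis at theta.
          c, s = np.cos(theta), np.sin(theta)
          return np.array([[c * c, c * s], [c * s, s * s]])

      def retarder(delta, theta):
          # Linear retarder with retardance delta and fast axis at angle theta.
          c, s = np.cos(theta), np.sin(theta)
          rot = np.array([[c, -s], [s, c]])
          jones = np.array([[np.exp(-1j * delta / 2), 0.0], [0.0, np.exp(1j * delta / 2)]])
          return rot @ jones @ rot.T

      E_in = polarizer(0.0) @ np.array([1.0, 0.0])             # horizontally polarized input
      E_out = polarizer(np.pi / 2) @ retarder(0.3, np.pi / 4) @ E_in
      intensity = float(np.abs(E_out) @ np.abs(E_out))         # transmitted intensity
      print(intensity)   # sin^2(delta/2) for a 45-degree fast axis: sin^2(0.15) ~ 0.022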

  20. Gas chromatographic quantitative analysis of methanol in wine: operative conditions, optimization and calibration model choice.

    PubMed

    Caruso, Rosario; Gambino, Grazia Laura; Scordino, Monica; Sabatino, Leonardo; Traulo, Pasqualino; Gagliano, Giacomo

    2011-12-01

    The influence of the wine distillation process on methanol content was determined by quantitative analysis using gas chromatography with flame ionization detection (GC-FID). A comparative study between direct injection of diluted wine and injection of distilled wine was performed. The distillation process does not affect methanol quantification in wines by more than 10%. While quantification performed on distilled samples gives more reliable results, a screening method based on wine injection after a 1:5 water dilution could be employed. The proposed technique was found to be a compromise between the time-consuming distillation process and direct wine injection. In the studied calibration range, the stability of the volatile compounds in the reference solution is concentration-dependent: stability is higher in the less concentrated reference solution. To shorten the operation time, a steeper temperature ramp and a higher carrier flow rate were employed. Under these conditions, helium consumption and column thermal stress increased; however, detection limits, calibration limits, and analytical method performance were not affected substantially by changing from normal to forced GC conditions. Statistical data evaluation was performed using both ordinary (OLS) and bivariate least squares (BLS) calibration models. Further confirmation was obtained that limit of detection (LOD) values calculated according to the 3σ approach are lower than those from the Hubaux-Vos (H-V) calculation method. The H-V LOD depends upon background noise, calibration parameters and the number of reference standard solutions employed in producing the calibration curve. These remarks were confirmed by both calibration models used.
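
    The OLS calibration and 3σ detection limit discussed above reduce to a few lines of Python; the standard concentrations and peak areas here are placeholders. A Hubaux-Vos limit, which also propagates calibration uncertainty, would come out higher, consistent with the comparison reported above.

      import numpy as np

      conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])       # standard concentrations (mg/L)
      area = np.array([152.0, 371.0, 748.0, 1480.0, 2950.0])  # GC-FID peak areas (placeholder)

      slope, intercept = np.polyfit(conc, area, 1)
      resid = area - (slope * conc + intercept)
      s_y = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))     # standard error of the fit

      lod_3sigma = 3.0 * s_y / slope   # 3-sigma detection limit in concentration units
      print(slope, lod_3sigma)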

  1. Establishment of Quantitative Severity Evaluation Model for Spinal Cord Injury by Metabolomic Fingerprinting

    PubMed Central

    Yang, Hao; Cohen, Mitchell Jay; Chen, Wei; Sun, Ming-Wei; Lu, Charles Damien

    2014-01-01

    Spinal cord injury (SCI) is a devastating event with limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR) screening, we identified 15 metabolites that made up an “Eigen-metabolome” capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites, regulated by 130 enzymes in the metabolic network, impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe and mild SCI and control rats into separate groups and to identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI. These included metabolic pathways such as retinal, glycerophospholipid, and arachidonic acid metabolism; the NAD–NADPH conversion process; tyrosine metabolism; and cadaverine and putrescine metabolism. In summary, we present a novel interdisciplinary method that integrates metabolomics and global metabolic network analysis to visualize metabolic network disturbances after SCI. Our study demonstrates that integrating 1H-NMR metabolomics with global metabolic network analysis is useful for visualizing complex metabolic disturbances after severe SCI. Furthermore, our findings may provide a new quantitative injury severity evaluation model for clinical use. PMID:24727691

  2. Simultaneous estimation of multiple quantitative trait loci and growth curve parameters through hierarchical Bayesian modeling

    PubMed Central

    Sillanpää, M J; Pikkuhookana, P; Abrahamsson, S; Knürr, T; Fries, A; Lerceteau, E; Waldmann, P; García-Gil, M R

    2012-01-01

    A novel hierarchical quantitative trait locus (QTL) mapping method using a polynomial growth function and a multiple-QTL model (with no dependence in time) in a multitrait framework is presented. The method considers a population-based sample in which individuals have been phenotyped (over time) with respect to some dynamic trait and genotyped at a given set of loci. A specific feature of the proposed approach is that, instead of an average functional curve, each individual has its own functional curve. Moreover, each QTL can modify the dynamic characteristics of an individual's trait value through its influence on one or more growth curve parameters. Apparent advantages of the approach include: (1) the assumption of time-independent QTL and environmental effects, (2) alleviating the necessity for an autoregressive covariance structure for residuals and (3) the flexibility to use variable selection methods. As a by-product of the method, heritabilities and genetic correlations can also be estimated for individual growth curve parameters, which are treated as latent traits. For selecting trait-associated loci in the model, we use a modified version of the well-known Bayesian adaptive shrinkage technique. We illustrate our approach by analysing a subsample of 500 individuals from the simulated QTLMAS 2009 data set, as well as simulation replicates and a real Scots pine (Pinus sylvestris) data set, using temporal measurements of height as the dynamic trait of interest. PMID:21792229

  3. Application of quantitative structure activity relationship (QSAR) models to predict ozone toxicity in the lung.

    PubMed

    Kafoury, Ramzi M; Huang, Ming-Ju

    2005-08-01

    The sequence of events leading to ozone-induced airway inflammation is not well known. To elucidate the molecular and cellular events underlying ozone toxicity in the lung, we hypothesized that lipid ozonation products (LOPs) generated by the reaction of ozone with unsaturated fatty acids in the epithelial lining fluid and cell membranes play a key role in mediating ozone-induced airway inflammation. To test our hypothesis, we ozonized 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) and generated LOPs. Confluent human bronchial epithelial cells were exposed to the derivatives of ozonized POPC (9-oxononanoyl, 9-hydroxy-9-hydroperoxynonanoyl, and 8-(5-octyl-1,2,4-trioxolan-3-yl)octanoyl) at a concentration of 10 μM, and the activity of phospholipases A2 (PLA2), C (PLC), and D (PLD) was measured (at 1, 0.5, and 1 h, respectively). Quantitative structure-activity relationship (QSAR) models were utilized to predict the biological activity of LOPs in airway epithelial cells. The QSAR results showed a strong correlation between experimental and computed activity (r = 0.97, 0.98, 0.99 for PLA2, PLC, and PLD, respectively). The results indicate that QSAR models can be utilized to predict the biological activity of the various ozone-derived LOP species in the lung.

  4. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    PubMed

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods has led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters: sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. As an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
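
    The two SDT parameters referred to above follow directly from hit and false-alarm rates (the multilevel estimation in the paper generalizes this per-participant calculation); the response counts here are invented for illustration.

      from scipy.stats import norm

      hits, misses = 42, 8    # "painful" vs "not painful" responses to painful stimuli
      fas, crs = 12, 38       # false alarms vs correct rejections for non-painful stimuli

      h = hits / (hits + misses)       # hit rate
      f = fas / (fas + crs)            # false-alarm rate
      d_prime = norm.ppf(h) - norm.ppf(f)              # sensitivity (discriminability)
      criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))   # response bias
      print(d_prime, criterion)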

  5. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    PubMed

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional, often forgotten, challenge is to account for important lower-order genetic effects, which may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost from these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers, and our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced by correction for lower-order effects, as are familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on a p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings, particularly when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures that identify lower-order effects to correct for during epistasis screening should be avoided, as should procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction by using residuals as the new trait. We advocate "on-the-fly" adjustment for lower-order effects when screening for SNP-SNP interactions.

  6. Hydrologic connectivity: Quantitative assessments of hydrologic-enforced drainage structures in an elevation model

    USGS Publications Warehouse

    Poppenga, Sandra; Worstell, Bruce B.

    2016-01-01

    Elevation data derived from light detection and ranging present challenges for hydrologic modeling, as the elevation surface includes bridge decks and elevated road features overlying culvert drainage structures. In reality, water is carried through these structures; in the elevation surface, however, these features impede modeled overland surface flow. Thus, a hydrologically-enforced elevation surface is needed for hydrodynamic modeling. In the Delaware River Basin, hydrologic-enforcement techniques were used to modify elevations to simulate how constructed drainage structures convey overland surface flow. By calculating residuals between unfilled and filled elevation surfaces, artificially pooled depressions that formed upstream of constructed drainage structures were defined, and elevation values were adjusted by generating transects at the locations of the drainage structures. An assessment of each hydrologically-enforced drainage structure was conducted using field-surveyed culvert and bridge coordinates obtained from numerous public agencies, but it was discovered that the disparate drainage structure datasets were not comprehensive enough to assess all remotely located depressions in need of hydrologic enforcement. Alternatively, orthoimagery was interpreted to define drainage structures near each depression, and these locations were used as reference points for a quantitative hydrologic-enforcement assessment. The orthoimagery-interpreted reference points resulted in a larger corresponding sample size than the assessment between hydrologically-enforced transects and field-surveyed data. This assessment demonstrates the viability of rules-based hydrologic enforcement, which is needed to achieve hydrologic connectivity and is valuable for hydrodynamic models in sensitive coastal regions. Hydrologically-enforced elevation data are also essential for merging with topographic/bathymetric elevation data that extend over vulnerable urbanized areas and dynamic coastal

  7. A probabilistic quantitative risk assessment model for the long-term work zone crashes.

    PubMed

    Meng, Qiang; Weng, Jinxian; Qu, Xiaobo

    2010-11-01

    Work zones, especially long-term work zones, increase traffic conflicts and cause safety problems. Proper casualty risk assessment for a work zone is important for both traffic safety engineers and travelers. This paper develops a novel probabilistic quantitative risk assessment (QRA) model to evaluate casualty risk, combining the frequency and consequence of all accident scenarios triggered by long-term work zone crashes. Casualty risk is measured by individual risk and societal risk: individual risk can be interpreted as the frequency of a driver/passenger being killed or injured, while societal risk describes the relation between frequency and the number of casualties. The proposed probabilistic QRA model consists of an estimate of work zone crash frequency, an event tree, and consequence estimation models. The event tree contains seven intermediate events: age (A), crash unit (CU), vehicle type (VT), alcohol (AL), light condition (LC), crash type (CT) and severity (S). Since the estimated probability of an intermediate event may carry large uncertainty, that uncertainty is characterized by a random variable. The consequence estimation model takes into account the combined effects of speed and emergency medical service response time (ERT) on the consequence of a work zone crash. Finally, a numerical example based on Southeast Michigan work zone crash data is carried out. The numerical results show a 62% decrease in individual fatality risk and a 44% reduction in individual injury risk if the mean travel speed is reduced by 20%, and a 5% reduction in individual fatality risk and a 0.05% reduction in individual injury risk if ERT is reduced by 20%. In other words, slowing down traffic is more effective than reducing ERT for casualty risk mitigation.
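
    A skeleton of the event-tree calculation described above: the frequency of each end branch is the crash frequency multiplied by the conditional probabilities along its path. Only four stand-in events with placeholder probabilities are shown, not the paper's seven events or its data.

      from itertools import product

      crash_freq = 120.0     # long-term work zone crashes per year (placeholder)
      events = {             # four stand-in intermediate events, placeholder probabilities
          "alcohol_involved": 0.10,
          "dark_conditions": 0.35,
          "heavy_vehicle": 0.20,
          "injury_or_fatality": 0.30,
      }

      scenarios = []
      for branches in product([True, False], repeat=len(events)):
          p = 1.0
          for (name, prob), taken in zip(events.items(), branches):
              p *= prob if taken else (1.0 - prob)
          scenarios.append((branches, crash_freq * p))   # frequency of this end branch

      # Frequency of casualty outcomes: sum over branches where the severity event occurs.
      casualty_freq = sum(f for b, f in scenarios if b[3])
      print(casualty_freq)   # equals crash_freq * P(injury_or_fatality) by construction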

  8. Rock physics models for constraining quantitative interpretation of ultrasonic data for biofilm growth and development

    NASA Astrophysics Data System (ADS)

    Alhadhrami, Fathiya Mohammed

    This study examines the use of rock physics modeling for quantitative interpretation of seismic data in the context of microbial growth and biofilm formation in unconsolidated sediment. The impetus for this research comes from geophysical experiments by Davis et al. (2010) and Kwon and Ajo-Franklin (2012). These studies observed that microbial growth has a small effect on P-wave velocities (VP) but a large effect on seismic amplitudes, and speculated that the amplitude variations were due to a combination of rock mechanical changes arising from the accumulation of microbial growth-related features such as biofilms. A more definite conclusion can be drawn by developing rock physics models that connect rock properties to seismic amplitudes. The primary objective of this work is to provide an explanation for the high amplitude attenuation due to biofilm growth. The results suggest that biofilm formation in the Davis et al. (2010) experiment exhibits two growth styles: a loadbearing style, in which the biofilm behaves like an additional mineral grain, and a non-loadbearing mode, in which the biofilm grows into the pore spaces. In the loadbearing mode, which we refer to as "filler", the biofilms contribute to the stiffness of the sediments. In the non-loadbearing mode, which we refer to as "mushroom", the biofilms contribute only to a change in the density of the sediments without affecting their strength. Both growth styles appear to change permeability more than the moduli or the density. As a result, while VP remains relatively unchanged, the amplitudes can change significantly depending on biofilm saturation. Interpreting seismic data from biofilm growth in terms of rock physics models provides greater insight into sediment-fluid interaction, and the models in turn can be used to understand microbially enhanced oil recovery and to assist in solving environmental issues such as creating bio

  9. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  10. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  11. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  12. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  13. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  14. A quantitative cell modeling and wound-healing analysis based on the Electric Cell-substrate Impedance Sensing (ECIS) method.

    PubMed

    Yang, Jen Ming; Chen, Szi-Wen; Yang, Jhe-Hao; Hsu, Chih-Chin; Wang, Jong-Shyan

    2016-02-01

    In this paper, a quantitative modeling and wound-healing analysis of fibroblast and human keratinocyte cells is presented. Our study was conducted using a continuous cellular impedance monitoring technique, dubbed Electric Cell-substrate Impedance Sensing (ECIS). In a previous work, we constructed a mathematical model for quantitatively analyzing cultured cell growth using time series data derived directly from ECIS. In this study, we first assessed the applicability of our model to keratinocyte cell growth. In addition, an electrical "wound-healing" assay was used to evaluate the healing process of keratinocyte cells at a variety of pressures. Two newly defined indicators, dubbed cell power and cell electroactivity, were developed to quantitatively characterize the biophysical behavior of cells. We then employed the wavelet transform method to perform a multi-scale analysis, so that cell power and cell electroactivity could be captured across multiple observational time scales. Numerical results indicated that our model fits the keratinocyte cell culture data well for cell growth modeling. The results of our quantitative analysis also showed that the wound-healing process was fastest at a negative pressure of 125 mmHg, consistent with the qualitative analysis results reported in previous works.

  15. The use of electromagnetic induction methods for establishing quantitative permafrost models in West Greenland

    NASA Astrophysics Data System (ADS)

    Ingeman-Nielsen, Thomas; Brandt, Inooraq

    2010-05-01

    permafrozen sediments is generally not available in Greenland, and mobilization costs are therefore considerable, thus limiting the use of geotechnical borings to larger infrastructure and construction projects. To overcome these problems, we have tested the use of shallow transient electromagnetic (TEM) measurements to provide constraints in terms of the depth to, and resistivity of, the conductive saline layer. We have tested such a setup at two field sites in the Ilulissat area (mid-west Greenland), one with available borehole information (site A), the second without (site C). VES and TEM soundings were collected at each site and the respective data sets were subsequently inverted using a mutually constrained inversion scheme. At site A, the TEM measurements (20 x 20 m square loop, in-loop configuration) show substantial and repeatable negative amplitude segments, and therefore it has not presently been possible to provide a quantitative interpretation for this location. Negative segments are typically a sign of induced polarization (IP) or cultural effects. Forward modeling based on inversion of the VES data constrained with borehole information has indicated that IP effects could indeed be the cause of the observed anomaly, although such effects are not normally expected in permafrost or saline deposits. Data from site C have shown that jointly inverting the TEM and VES measurements does provide well-determined estimates for all layer parameters except the thickness of the active layer and the resistivity of the bedrock. The active layer thickness may be easily probed to provide prior information on this parameter, and the bedrock resistivity is of limited interest in technical applications. Although no confirming borehole information is available at this site, these results indicate that joint or mutually constrained inversion of TEM and VES data is feasible and that this setup may provide a fast and cost-effective method for establishing quantitative interpretations of permafrost structure in

  16. Fechner's law in metacognition: A quantitative model of visual working memory confidence.

    PubMed

    van den Berg, Ronald; Yoo, Aspen H; Ma, Wei Ji

    2017-03-01

    Although visual working memory (VWM) has been studied extensively, it is unknown how people form confidence judgments about their memories. Peirce (1878) speculated that Fechner's law, which states that sensation is proportional to the logarithm of stimulus intensity, might apply to confidence reports. Based on this idea, we hypothesize that humans map the precision of their VWM contents to a confidence rating through Fechner's law. We incorporate this hypothesis into the best available model of VWM encoding and fit it to data from a delayed-estimation experiment. The model provides an excellent account of human confidence rating distributions as well as the relation between performance and confidence. Moreover, the best-fitting mapping in a model with a highly flexible mapping closely resembles the logarithmic mapping, suggesting that no alternative mapping accounts for the data better than Fechner's law. We propose a neural implementation of the model and find that this model also fits the behavioral data well. Furthermore, we find that jointly fitting memory errors and confidence ratings boosts the power to distinguish previously proposed VWM encoding models by a factor of 5.99 compared to fitting only memory errors. Finally, we show that Fechner's law also accounts for metacognitive judgments in a word recognition memory task, a first indication that it may be a general law in metacognition. Our work presents the first model to jointly account for errors and confidence ratings in VWM and could lay the groundwork for understanding the computational mechanisms of metacognition.
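
    The hypothesized Fechner mapping from memory precision to a confidence rating takes only a few lines; the scale parameters, rating range, and gamma-distributed precisions below are illustrative assumptions, not fitted values.

      import numpy as np

      def confidence_rating(precision, a=1.5, b=4.0, n_levels=6):
          # Fechner's law: rating is linear in log(precision), discretized to 1..n_levels.
          r = np.rint(a * np.log(precision) + b)
          return np.clip(r, 1, n_levels).astype(int)

      precisions = np.random.default_rng(5).gamma(shape=2.0, scale=1.0, size=10)
      print(confidence_rating(precisions))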

  17. Chronic Spinal Compression Model in Minipigs: A Systematic Behavioral, Qualitative, and Quantitative Neuropathological Study

    PubMed Central

    Navarro, Roman; Juhas, Stefan; Keshavarzi, Sassan; Juhasova, Jana; Motlik, Jan; Johe, Karl; Marsala, Silvia; Scadeng, Miriam; Lazar, Peter; Tomori, Zoltan; Schulteis, Gery; Beattie, Michael; Ciacci, Joseph D.

    2012-01-01

    Abstract The goal of the present study was to develop a porcine spinal cord injury (SCI) model and to describe the neurological outcome and characterize the corresponding quantitative and qualitative histological changes at 4–9 months after injury. Adult Gottingen-Minnesota minipigs were anesthetized and placed in a spine immobilization frame. The exposed T12 spinal segment was compressed in a dorso-ventral direction using a 5-mm-diameter circular bar with a progressively increasing peak force (1.5, 2.0, or 2.5 kg) at a velocity of 3 cm/sec. During recovery, motor and sensory function were periodically monitored. At the end of the survival period, the animals were perfusion-fixed and the extent of local SCI was analyzed by (1) post-mortem MRI analysis of dissected spinal cords, (2) qualitative and quantitative analysis of axonal survival at the epicenter of injury, and (3) defining the presence of local inflammatory changes, astrocytosis, and schwannosis. Following 2.5-kg spinal cord compression, the animals demonstrated a near complete loss of motor and sensory function with no recovery over the next 4–9 months. Those that underwent spinal cord compression with 2.0 kg force developed an incomplete injury with progressive partial neurological recovery, characterized by a restricted ability to stand and walk. Animals injured with a spinal compression force of 1.5 kg showed near normal ambulation 10 days after injury. In fully paralyzed animals (2.5 kg), MRI analysis demonstrated a loss of spinal white matter integrity and extensive septal cavitations. A significant correlation between the magnitude of loss of small and medium-sized myelinated axons in the ventral funiculus and neurological deficits was identified. These data, demonstrating stable neurological deficits in severely injured animals, similarities of spinal pathology to humans, and the relatively good post-injury tolerance of this strain of minipigs to spinal trauma, suggest that this model can successfully be used

  18. Quantitative Evaluation of Models for Solvent-based, On-column Focusing in Liquid Chromatography

    PubMed Central

    Groskreutz, Stephen R.; Weber, Stephen G.

    2015-01-01

    On-column focusing or preconcentration is a well-known approach to increase concentration sensitivity by generating transient conditions during the injection that result in high solute retention. Preconcentration results from two phenomena: 1) solutes are retained as they enter the column, their velocities being k′-dependent and lower than the mobile phase velocity; and 2) zones are compressed by the step gradient that results from the higher-elution-strength mobile phase passing through the solute zones. Several workers have derived the result that the ratio of the eluted zone width (in time) to the injected time width is k2/k1, where k1 is the retention factor of a solute in the sample solvent and k2 is the retention factor in the mobile phase (isocratic). Mills et al. proposed a different factor. To date, neither model has been adequately tested; the goal of this work was to evaluate both quantitatively. We used n-alkyl esters of p-hydroxybenzoic acid (parabens) as solutes. By making large injections to create obvious volume overload, we could accurately measure the ratio of widths (eluted/injected) over a range of values of k1 and k2. The Mills et al. model does not fit the data. The data are in general agreement with the factor k2/k1, but focusing is about 10% better than predicted. We attribute the extra focusing to the fact that the second, compression, phenomenon produces a narrower zone than expected for the passage of a step gradient through the zone. PMID:26210110
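
    The k2/k1 width factor under test can be compared with a back-of-envelope band-velocity argument (band velocity u/(1+k)), which gives (1+k2)/(1+k1) before the compression step; both reduce to roughly k2/k1 for well-retained solutes. The numbers below are illustrative, not measurements from the paper.

      k1, k2, t_inj = 25.0, 2.5, 60.0   # k in sample solvent, k in mobile phase, injected width (s)

      w_velocity = t_inj * (1.0 + k2) / (1.0 + k1)   # velocity argument, no compression step
      w_k_ratio = t_inj * k2 / k1                    # the k2/k1 factor cited above
      print(w_velocity, w_k_ratio)                   # about 8.1 s vs 6.0 s eluted width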

  19. A rodent model of traumatic stress induces lasting sleep and quantitative electroencephalographic disturbances.

    PubMed

    Nedelcovych, Michael T; Gould, Robert W; Zhan, Xiaoyan; Bubser, Michael; Gong, Xuewen; Grannan, Michael; Thompson, Analisa T; Ivarsson, Magnus; Lindsley, Craig W; Conn, P Jeffrey; Jones, Carrie K

    2015-03-18

    Hyperarousal and sleep disturbances are common, debilitating symptoms of post-traumatic stress disorder (PTSD). PTSD patients also exhibit abnormalities in quantitative electroencephalography (qEEG) power spectra during wake as well as rapid eye movement (REM) and non-REM (NREM) sleep. Selective serotonin reuptake inhibitors (SSRIs), the first-line pharmacological treatment for PTSD, provide modest remediation of the hyperarousal symptoms in PTSD patients, but have little to no effect on the sleep-wake architecture deficits. Development of novel therapeutics for these sleep-wake architecture deficits is limited by a lack of relevant animal models. Thus, the present study investigated whether single prolonged stress (SPS), a rodent model of traumatic stress, induces PTSD-like sleep-wake and qEEG spectral power abnormalities that correlate with changes in central serotonin (5-HT) and neuropeptide Y (NPY) signaling in rats. Rats were implanted with telemetric recording devices to continuously measure EEG before and after SPS treatment. A second cohort of rats was used to measure SPS-induced changes in plasma corticosterone, 5-HT utilization, and NPY expression in brain regions that comprise the neural fear circuitry. SPS caused sustained dysregulation of NREM and REM sleep, accompanied by state-dependent alterations in qEEG power spectra indicative of cortical hyperarousal. These changes corresponded with acute induction of the corticosterone receptor co-chaperone FK506-binding protein 51 and delayed reductions in 5-HT utilization and NPY expression in the amygdala. SPS represents a preclinical model of PTSD-related sleep-wake and qEEG disturbances with underlying alterations in neurotransmitter systems known to modulate both sleep-wake architecture and the neural fear circuitry.

  20. Quantitative high-throughput screening for chemical toxicity in a population-based in vitro model.

    PubMed

    Lock, Eric F; Abdo, Nour; Huang, Ruili; Xia, Menghang; Kosyk, Oksana; O'Shea, Shannon H; Zhou, Yi-Hui; Sedykh, Alexander; Tropsha, Alexander; Austin, Christopher P; Tice, Raymond R; Wright, Fred A; Rusyn, Ivan

    2012-04-01

    A shift in toxicity testing from in vivo to in vitro may efficiently prioritize compounds, reveal new mechanisms, and enable predictive modeling. Quantitative high-throughput screening (qHTS) is a major source of data for computational toxicology, and our goal in this study was to aid in the development of predictive in vitro models of chemical-induced toxicity, anchored on interindividual genetic variability. Eighty-one human lymphoblast cell lines from 27 Centre d'Etude du Polymorphisme Humain trios were exposed to 240 chemical substances (12 concentrations, 0.26 nM-46.0 μM) and evaluated for cytotoxicity and apoptosis. qHTS screening in the genetically defined population produced robust and reproducible results, which allowed for cross-compound, cross-assay, and cross-individual comparisons. Some compounds were cytotoxic to all cell types at similar concentrations, whereas others exhibited interindividual differences in cytotoxicity. The qHTS in a population-based human in vitro model system has several unique aspects that are of utility for toxicity testing, chemical prioritization, and high-throughput risk assessment. First, standardized and high-quality concentration-response profiling, with reproducibility confirmed by comparison with previous experiments, enables prioritization of chemicals by their interindividual range in cytotoxicity. Second, genome-wide association analysis of cytotoxicity phenotypes allows exploration of the potential genetic determinants of interindividual variability in toxicity. Furthermore, highly significant associations identified through the analysis of population-level correlations between basal gene expression variability and chemical-induced toxicity suggest plausible mode-of-action hypotheses for follow-up analyses. We conclude that as the improved resolution of genetic profiling can now be matched with high-quality in vitro screening data, the evaluation of the toxicity pathways and the effects of

  1. Effect of platform, reference material, and quantification model on enumeration of Enterococcus by quantitative PCR methods

    EPA Science Inventory

    Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. qPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testi...
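
    Quantification models for qPCR generally map a measured cycle threshold (Ct) onto a copy number through a standard curve. A minimal sketch with invented calibration values:

```python
import numpy as np

log_copies = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # log10 copies in the standards
ct = np.array([36.1, 32.8, 29.4, 26.0, 22.7])     # measured Ct values (invented)

slope, intercept = np.polyfit(log_copies, ct, 1)  # Ct = slope*log10(N) + intercept
efficiency = 10 ** (-1 / slope) - 1               # amplification efficiency
print(f"amplification efficiency ~ {efficiency:.1%}")

ct_sample = 30.2                                  # Ct of an unknown water sample
print("estimated copies:", 10 ** ((ct_sample - intercept) / slope))
```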

  2. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    PubMed

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased by up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical
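
    A minimal sketch of how the multinomial multiple-strain step and a consumer-phase draw might sit inside one Monte Carlo iteration; the strain prevalences, growth parameters, and risk threshold below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_iter = 10_000

strain_p = [0.6, 0.3, 0.1]       # prevalence of three strain genotypes (assumed)
toxigenic = [False, True, True]  # which genotypes produce enterotoxin A (assumed)

risky = 0
for _ in range(n_iter):
    cells = rng.poisson(200)                   # S. aureus cells per serving
    counts = rng.multinomial(cells, strain_p)  # multinomial strain partition
    hours = rng.triangular(0, 12, 48)          # household storage time, h (assumed)
    growth = 10 ** (0.08 * hours)              # crude growth factor (invented)
    tox = sum(c for c, t in zip(counts, toxigenic) if t) * growth
    risky += tox > 1e5                         # enterotoxin-risk threshold (assumed)
print("fraction of risky servings:", risky / n_iter)
```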

  3. Quantitative constraint-based computational model of tumor-to-stroma coupling via lactate shuttle

    PubMed Central

    Capuani, Fabrizio; De Martino, Daniele; Marinari, Enzo; De Martino, Andrea

    2015-01-01

    Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell’s surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell’s demand for ATP, amino acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment. PMID:26149467
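
    Constraint-based models of this kind reduce to linear programs: maximize an objective (here ATP production) subject to steady-state stoichiometry and flux bounds. A toy three-reaction sketch with scipy, in which a capped respiratory flux forces the fermentative overflow that is the lactate-secretion analogue; the network and all numbers are invented for illustration:

```python
from scipy.optimize import linprog

# Reactions: v0 = glucose uptake, v1 = fermentation (2 ATP/glucose),
# v2 = respiration (32 ATP/glucose). Steady state: v0 - v1 - v2 = 0.
A_eq = [[1.0, -1.0, -1.0]]
b_eq = [0.0]
bounds = [(0, 10.0),    # glucose uptake capacity
          (0, None),    # fermentation (lactate overflow) unbounded
          (0, 1.0)]     # respiration capped, e.g. by oxygen or proteome cost

# linprog minimizes, so negate the ATP objective 2*v1 + 32*v2
res = linprog(c=[0.0, -2.0, -32.0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("fluxes (uptake, fermentation, respiration):", res.x)
print("ATP production rate:", -res.fun)
```

    At the optimum the respiratory bound saturates and the remaining carbon is fermented, which is the qualitative Warburg-overflow behavior the abstract describes.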

  4. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models

    SciTech Connect

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-07-15

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds.
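
    A QSTR model in this spirit maps computed molecular descriptors onto a toxicity class. A minimal sketch with placeholder descriptors and a toy labeling rule (not the ATSDR laboratory's actual models); the feature-importance readout at the end echoes the rQSTR idea of identifying which structural attributes drive the prediction:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))   # e.g. logP, MW, TPSA, HBD, HBA (placeholders)
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0).astype(int)  # toy label rule

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

x_query = rng.normal(size=(1, 5))     # descriptor vector of a query compound
print("predicted toxicant class:", int(model.predict(x_query)[0]))
print("descriptor importances:", model.feature_importances_.round(2))
```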

  5. Synthesis, photodynamic activity, and quantitative structure-activity relationship modelling of a series of BODIPYs.

    PubMed

    Caruso, Enrico; Gariboldi, Marzia; Sangion, Alessandro; Gramatica, Paola; Banfi, Stefano

    2017-02-01

    Here we report the synthesis of eleven new BODIPYs (14-24) characterized by the presence of an aromatic ring on the 8 (meso) position and of iodine atoms on the pyrrolic 2,6 positions. These molecules, together with twelve BODIPYs already reported by us (1-12), represent a large panel of BODIPYs showing different atoms or groups as substituents of the aromatic moiety. Two physico-chemical features (¹O₂ generation rate and lipophilicity), which can play a fundamental role in the outcome as photosensitizers, have been studied. The in vitro photo-induced cell-killing efficacy of 23 PSs was studied on the SKOV3 cell line by treating the cells for 24 h in the dark and then irradiating them for 2 h with a green LED device (fluence 25.2 J/cm²). The cell-killing efficacy was assessed with the MTT test and compared with that of the meso-unsubstituted compound (13). In order to understand the possible effect of the substituents, a predictive quantitative structure-activity relationship (QSAR) regression model, based on theoretical holistic molecular descriptors, was developed. The results clearly indicate that the presence of an aromatic ring is fundamental for an excellent photodynamic response, whereas the electronic effects and the position of the substituents on the aromatic ring do not influence the photodynamic efficacy.
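
    A minimal sketch of the descriptor-based regression idea behind such a QSAR model, using ordinary least squares on synthetic lipophilicity and ¹O₂-rate values (the published model used theoretical holistic descriptors; everything below is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 23                               # panel of photosensitizers
logP = rng.uniform(1, 6, n)          # lipophilicity descriptor (placeholder)
so_rate = rng.uniform(0.1, 1.0, n)   # relative singlet-oxygen rate (placeholder)
activity = 0.8 * so_rate + 0.1 * logP + rng.normal(0, 0.05, n)  # pEC50-like

X = np.column_stack([np.ones(n), logP, so_rate])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)   # OLS fit
print("intercept, logP and 1O2 coefficients:", coef.round(3))
```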

  6. Quantitative Proteomic Profiling of Low-Dose Ionizing Radiation Effects in a Human Skin Model

    PubMed Central

    Hengel, Shawna M.; Aldrich, Joshua T.; Waters, Katrina M.; Pasa-Tolic, Ljiljana; Stenoien, David L.

    2014-01-01

    To assess responses to low-dose ionizing radiation (LD-IR) exposures potentially encountered during medical diagnostic procedures, nuclear accidents or terrorist acts, a quantitative proteomic approach was used to identify changes in protein abundance in a reconstituted human skin tissue model treated with 0.1 Gy of ionizing radiation. To improve the dynamic range of the assay, subcellular fractionation was employed to remove highly abundant structural proteins and to provide insight into radiation-induced alterations in protein localization. Relative peptide quantification across cellular fractions, control and irradiated samples was performed using 8-plex iTRAQ labeling followed by online two-dimensional nano-scale liquid chromatography and high-resolution MS/MS analysis. A total of 107 proteins were detected with statistically significant radiation-induced changes in abundance (>1.5-fold) and/or subcellular localization compared to controls. The top biological pathways identified using bioinformatics include organ development, anatomical structure formation and the regulation of the actin cytoskeleton. From the proteomic data, a change in proteolytic processing and subcellular localization of the skin barrier protein filaggrin was identified, and the results were confirmed by western blotting. These data indicate that post-transcriptional regulation of protein abundance, localization and proteolytic processing plays an important role in regulating the radiation response in human tissues. PMID:28250387
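
    The abundance-change screen amounts to a fold-change threshold combined with a significance test across labeled channels. A minimal sketch on synthetic intensities, mirroring the >1.5-fold criterion (channel counts, noise levels, and the spiked changes are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_prot = 1000
control = rng.lognormal(10, 1, size=(n_prot, 4))             # 4 control channels
treated = control * rng.lognormal(0, 0.2, size=(n_prot, 4))  # 4 irradiated channels
treated[:50] *= 2.0                                          # simulate true changes

log_fc = np.log2(treated.mean(axis=1) / control.mean(axis=1))
_, pvals = stats.ttest_ind(np.log2(treated), np.log2(control), axis=1)

hits = (np.abs(log_fc) > np.log2(1.5)) & (pvals < 0.05)
print("proteins flagged as changed:", int(hits.sum()))
```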

  7. Quantitative profiling of brain lipid raft proteome in a mouse model of fragile X syndrome.

    PubMed

    Kalinowska, Magdalena; Castillo, Catherine; Francesconi, Anna

    2015-01-01

    Fragile X Syndrome, a leading cause of inherited intellectual disability and autism, arises from transcriptional silencing of the FMR1 gene encoding an RNA-binding protein, Fragile X Mental Retardation Protein (FMRP). FMRP can regulate the expression of approximately 4% of brain transcripts through its role in regulation of mRNA transport, stability and translation, thus providing a molecular rationale for its potential pleiotropic effects on neuronal and brain circuitry function. Several intracellular signaling pathways are dysregulated in the absence of FMRP suggesting that cellular deficits may be broad and could result in homeostatic changes. Lipid rafts are specialized regions of the plasma membrane, enriched in cholesterol and glycosphingolipids, involved in regulation of intracellular signaling. Among transcripts targeted by FMRP, a subset encodes proteins involved in lipid biosynthesis and homeostasis, dysregulation of which could affect the integrity and function of lipid rafts. Using a quantitative mass spectrometry-based approach we analyzed the lipid raft proteome of Fmr1 knockout mice, an animal model of Fragile X syndrome, and identified candidate proteins that are differentially represented in Fmr1 knockout mice lipid rafts. Furthermore, network analysis of these candidate proteins reveals connectivity between them and predicts functional connectivity with genes encoding components of myelin sheath, axonal processes and growth cones. Our findings provide insight to aid in identifying molecular and cellular dysfunctions arising from Fmr1 silencing and in uncovering shared pathologies between Fragile X syndrome and other autism spectrum disorders.
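
    A minimal sketch of the network-connectivity step, assuming networkx; the candidate proteins and interaction edges below are hypothetical stand-ins for the reported hits:

```python
import networkx as nx

# Hypothetical differentially represented candidates and interaction edges
candidates = ["Mbp", "Plp1", "Cnp", "Gap43", "Actb"]
edges = [("Mbp", "Plp1"), ("Plp1", "Cnp"), ("Cnp", "Mbp"), ("Gap43", "Actb")]

G = nx.Graph(edges)
G.add_nodes_from(candidates)             # keep isolated candidates in the graph
for module in nx.connected_components(G):
    print("module:", sorted(module))     # candidate functional modules
print("degree centrality:", nx.degree_centrality(G))
```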

  8. Quantitation and pharmacokinetic modeling of therapeutic antibody quality attributes in human studies

    PubMed Central

    Li, Yinyin; Monine, Michael; Huang, Yu; Swann, Patrick; Nestorov, Ivan; Lyubarskaya, Yelena

    2016-01-01

    A thorough understanding of drug metabolism and disposition can aid in the assessment of efficacy and safety. However, analytical methods used in pharmacokinetics (PK) studies of protein therapeutics are usually based on ELISA, and therefore can provide a limited perspective on the quality of the drug in concentration measurements. Individual post-translational modifications (PTMs) of protein therapeutics are rarely considered for PK analysis, partly because it is technically difficult to recover and quantify individual protein variants from biological fluids. Meanwhile, PTMs may be directly linked to variations in drug efficacy and safety, and therefore understanding of clearance and metabolism of biopharmaceutical protein variants during clinical studies is an important consideration. To address such challenges, we developed an affinity-purification procedure followed by peptide mapping with mass spectrometric detection, which can profile multiple quality attributes of therapeutic antibodies recovered from patient sera. The obtained data enable quantitative modeling, which allows for simulation of the PK of different individual PTMs or attribute levels in vivo and thus facilitates the assessment of the in vivo impact of quality attributes. Such information can contribute to the product quality attribute risk assessment during manufacturing process development and inform an appropriate process control strategy. PMID:27216574
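
    The modeling idea is that each PTM variant can be assigned its own clearance, so attribute levels drift over time in vivo. A minimal one-compartment sketch with invented rate constants and starting composition:

```python
import numpy as np

t = np.linspace(0, 28, 200)     # days post-dose
dose = 100.0                    # arbitrary concentration units

k_native = 0.08                 # 1/day clearance, unmodified form (assumed)
k_variant = 0.16                # 1/day, a faster-clearing PTM variant (assumed)

c_native = 0.9 * dose * np.exp(-k_native * t)    # 90% native at administration
c_variant = 0.1 * dose * np.exp(-k_variant * t)  # 10% modified at administration

frac = c_variant / (c_native + c_variant)        # attribute level over time
print(f"variant fraction: day 0 = {frac[0]:.3f}, day 28 = {frac[-1]:.3f}")
```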

  9. Modeling development and quantitative trait mapping reveal independent genetic modules for leaf size and shape.

    PubMed

    Baker, Robert L; Leong, Wen Fung; Brock, Marcus T; Markelz, R J Cody; Covington, Michael F; Devisetty, Upendra K; Edwards, Christine E; Maloof, Julin; Welch, Stephen; Weinig, Cynthia

    2015-10-01

    Improved predictions of fitness and yield may be obtained by characterizing the genetic controls and environmental dependencies of organismal ontogeny. Elucidating the shape of growth curves may reveal novel genetic controls that single-time-point (STP) analyses do not because, in theory, infinite numbers of growth curves can result in the same final measurement. We measured leaf lengths and widths in Brassica rapa recombinant inbred lines (RILs) throughout ontogeny. We modeled leaf growth and allometry as function-valued traits (FVT), and examined genetic correlations between these traits and aspects of phenology, physiology, circadian rhythms and fitness. We used RNA-seq to construct a SNP linkage map and mapped quantitative trait loci (QTL). We found genetic trade-offs between leaf size and growth rate FVT and uncovered differences in genotypic and QTL correlations involving FVT vs STPs. We identified leaf shape (allometry) as a genetic module independent of length and width and identified selection on FVT parameters of development. Leaf shape is associated with venation features that affect desiccation resistance. The genetic independence of leaf shape from other leaf traits may therefore enable crop optimization in leaf shape without negative effects on traits such as size, growth rate, duration or gas exchange.
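
    Treating growth as a function-valued trait means fitting a parametric curve per line and then mapping the fitted parameters as traits. A minimal sketch with a logistic curve on synthetic leaf-length data (curve form and parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L_max, k, t_mid):
    # L_max: asymptotic length; k: growth rate; t_mid: inflection day
    return L_max / (1 + np.exp(-k * (t - t_mid)))

rng = np.random.default_rng(6)
days = np.arange(5, 40, 3, dtype=float)
length = logistic(days, 60.0, 0.25, 18.0) + rng.normal(0, 1.5, days.size)

(L_max, k, t_mid), _ = curve_fit(logistic, days, length, p0=[50, 0.2, 15])
print(f"L_max={L_max:.1f} mm, k={k:.3f}/day, t_mid={t_mid:.1f} days")
```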

  10. Quantitative trait locus analysis of symbiotic nitrogen fixation activity in the model legume Lotus japonicus.

    PubMed

    Tominaga, Akiyoshi; Gondo, Takahiro; Akashi, Ryo; Zheng, Shao-Hui; Arima, Susumu; Suzuki, Akihiro

    2012-05-01

    Many legumes form nitrogen-fixing root nodules. An elevation of nitrogen fixation in such legumes would have significant implications for plant growth and biomass production in agriculture. To identify the genetic basis for the regulation of nitrogen fixation, quantitative trait locus (QTL) analysis was conducted with recombinant inbred lines derived from the cross Miyakojima MG-20 × Gifu B-129 in the model legume Lotus japonicus. This population was inoculated with Mesorhizobium loti MAFF303099 and grown for 14 days in pots containing vermiculite. Phenotypic data were collected for acetylene reduction activity (ARA) per plant (ARA/P), ARA per nodule weight (ARA/NW), ARA per nodule number (ARA/NN), NN per plant, NW per plant, stem length (SL), SL without inoculation (SLbac-), shoot dry weight without inoculation (SWbac-), root length without inoculation (RLbac-), and root dry weight (RWbac-), and finally 34 QTLs were identified. ARA/P, ARA/NN, NW, and SL showed strong correlations and QTL co-localization, suggesting that several plant characteristics important for symbiotic nitrogen fixation are controlled by the same locus. QTLs for ARA/P, ARA/NN, NW, and SL, co-localized around marker TM0832 on chromosome 4, were also co-localized with previously reported QTLs for seed mass. This is the first report of QTL analysis for symbiotic nitrogen fixation activity traits.
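
    A minimal sketch of a single-marker QTL scan of the kind used for such traits: regress the phenotype (e.g. ARA/P) on the genotype at each marker and report a LOD score. Genotypes, marker count, and the hidden QTL below are simulated, not study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_ril, n_marker = 100, 150
geno = rng.integers(0, 2, size=(n_ril, n_marker))    # 0/1 parental alleles
pheno = 2.0 * geno[:, 42] + rng.normal(0, 1, n_ril)  # hidden QTL at marker 42

lod = np.empty(n_marker)
for m in range(n_marker):
    r, _ = stats.pearsonr(geno[:, m], pheno)
    lod[m] = -(n_ril / 2) * np.log10(1 - r ** 2)     # LOD from marker regression

print("peak marker:", int(lod.argmax()), "LOD:", round(float(lod.max()), 1))
```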

  11. Quantitative studies of animal colour constancy: using the chicken as model.

    PubMed

    Olsson, Peter; Wilby, David; Kelber, Almut

    2016-05-11

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations.
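
    The receptor-noise-limited model scores the discriminability of two stimuli as a chromatic distance ΔS in units of just-noticeable differences. A minimal sketch of the two-receptor (dichromatic) case, which shows the structure of the calculation even though the chicken is a tetrachromat; the quantum catches and noise values below are invented:

```python
import numpy as np

def delta_s_dichromat(qa, qb, e):
    """Chromatic distance (in JND) between stimuli A and B for two receptor
    classes with quantum catches qa, qb and noise levels e (length-2 each)."""
    df = np.log(np.asarray(qa) / np.asarray(qb))     # receptor contrasts
    return abs(df[0] - df[1]) / np.sqrt(e[0] ** 2 + e[1] ** 2)

# quantum catches under training vs. test illuminant; noise values assumed
print(delta_s_dichromat(qa=[0.8, 0.3], qb=[0.6, 0.35], e=[0.05, 0.05]))
```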

  12. A quantitative microbiological exposure assessment model for Bacillus cereus in REPFEDs.

    PubMed

    Daelman, Jeff; Membré, Jeanne-Marie; Jacxsens, Liesbeth; Vermeulen, An; Devlieghere, Frank; Uyttendaele, Mieke

    2013-09-16

    One of the pathogens of concern in refrigerated and processed foods of extended durability (REPFEDs) is psychrotrophic Bacillus cereus, because of its ability to survive pasteurisation and grow at low temperatures. In this study a quantitative microbiological exposure assessment (QMEA) of psychrotrophic B. cereus in REPFEDs is presented. The goal is to quantify (i) the prevalence and concentration of B. cereus during production and shelf life, (ii) the number of packages with potential emetic toxin formation and (iii) the impact of different processing steps and consumer behaviour on the exposure to B. cereus from REPFEDs. The QMEA comprises the entire production and distribution process, from raw materials, through pasteurisation, to the moment the product is consumed or discarded. To model this process the modular process risk model (MPRM) was used (Nauta, 2002). The product life was divided into nine modules, each module corresponding to a basic process: (1) raw material contamination, (2) cross contamination during handling, (3) inactivation during preparation, (4) growth during intermediate storage, (5) partitioning of batches in portions, (6) mixing portions to create the product, (7) recontamination during assembly and packaging, (8) inactivation during pasteurisation and (9) growth during shelf life. Each of the modules was modelled and built using a combination of newly gathered and literature data, predictive models and expert opinions. Units (batch/portion/package) with a B. cereus concentration of 10⁵ CFU/g or more were considered 'risky' units. Results show that the main drivers of variability and uncertainty are consumer behaviour, strain variability and modelling error. The prevalence of B. cereus in the final products is estimated at 48.6% (±0.01%) and the number of packs with unacceptably high B. cereus counts at the moment of consumption is estimated at 4750 packs per million (0.48%). Cold storage at retail and consumer level is vital in limiting the exposure
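
    Each MPRM module is a probabilistic input-output map; the shelf-life growth module (9), for instance, can be run as a Monte Carlo over storage conditions. A minimal sketch with invented initial contamination, temperature, and growth-rate parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

n0 = 10 ** rng.normal(0.5, 0.5, n)   # CFU/g surviving pasteurisation (invented)
temp = rng.triangular(4, 7, 12, n)   # consumer storage temperature, °C (invented)
days = rng.uniform(1, 21, n)         # days of storage before consumption

mu = np.maximum(0.0, 0.05 * (temp - 4))  # log10 growth/day, linear in T (invented)
n_final = n0 * 10 ** (mu * days)

print("fraction of 'risky' packs (>1e5 CFU/g):", (n_final > 1e5).mean())
```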

  13. Quantitative microbial risk assessment models for consumption of raw vegetables irrigated with reclaimed water.

    PubMed

    Hamilton, Andrew J; Stagnitti, Frank; Premier, Robert; Boland, Anne-Maree; Hale, Glenn

    2006-05-01

    Quantitative microbial risk assessment models for estimating the annual risk of enteric virus infection associated with consuming raw vegetables that have been overhead irrigated with nondisinfected secondary treated reclaimed water were constructed. We ran models for several different scenarios of crop type, viral concentration in effluent, and time since last irrigation event. The mean annual risk of infection was always less for cucumber than for broccoli, cabbage, or lettuce. Across the various crops, effluent qualities, and viral decay rates considered, the annual risk of infection ranged from 10⁻³ to 10⁻¹ when reclaimed-water irrigation ceased 1 day
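
    Annual risk figures of this kind come from compounding a per-exposure dose-response probability over the number of exposure events per year. A minimal sketch using an approximate beta-Poisson dose-response form; the parameters, ingested dose, and serving frequency below are illustrative, not the study's values:

```python
alpha, beta = 0.253, 0.422   # approximate beta-Poisson parameters (illustrative)
dose = 1e-3                  # viruses ingested per serving after decay (assumed)

p_serving = 1 - (1 + dose / beta) ** (-alpha)  # per-exposure infection probability
n_servings = 70                                # raw-vegetable servings/year (assumed)
p_annual = 1 - (1 - p_serving) ** n_servings   # compounded annual risk

print(f"per-serving risk {p_serving:.2e}, annual risk {p_annual:.2e}")
```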