Sample records for quantitative models describing

  1. Challenges in Developing Models Describing Complex Soil Systems

    NASA Astrophysics Data System (ADS)

    Simunek, J.; Jacques, D.

    2014-12-01

    Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools to integrate our understanding of complex soil systems, and the soil science community has often called for models that would include a large number of these diverse processes. However, when attempts have been made to develop such models, the response from the community has not always been enthusiastic, especially once it became clear that such models are consequently highly complex. They require a large number of parameters, not all of which can be easily (or at all) measured or identified, and many of which are associated with large uncertainties; they also demand from their users deep knowledge of most or all of the implemented physical, mechanical, chemical, and biological processes. The real, or perceived, complexity of these models then discourages users from applying them even to relatively simple problems for which they would be perfectly adequate. Due to the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code inter-comparison, likely the most suitable method to assess code capabilities and model performance, requires the existence of multiple models with similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.

  2. A quantitative systems pharmacology model of blood coagulation network describes in vivo biomarker changes in non-bleeding subjects.

    PubMed

    Lee, D; Nayak, S; Martin, S W; Heatherington, A C; Vicini, P; Hua, F

    2016-12-01

    Essentials Baseline coagulation activity can be detected in the non-bleeding state by in vivo biomarker levels. A detailed mathematical model of coagulation was developed to describe the non-bleeding state. The optimized model described in vivo biomarkers with recombinant activated factor VII treatment. Sensitivity analysis predicted that prothrombin fragment 1+2 and D-dimer are regulated differently. Background Prothrombin fragment 1+2 (F1+2), thrombin-antithrombin III complex (TAT) and D-dimer can be detected in plasma from non-bleeding hemostatically normal subjects or hemophilic patients. They are often used as safety or pharmacodynamic biomarkers for hemostasis-modulating therapies in the clinic, and provide insights into in vivo coagulation activity. Objectives To develop a quantitative systems pharmacology (QSP) model of the blood coagulation network to describe in vivo biomarkers, including F1+2, TAT, and D-dimer, under non-bleeding conditions. Methods The QSP model included intrinsic and extrinsic coagulation pathways, platelet activation state-dependent kinetics, and a two-compartment pharmacokinetic model for recombinant activated factor VII (rFVIIa). Literature data on F1+2 and D-dimer at baseline and changes with rFVIIa treatment were used for parameter optimization. Multiparametric sensitivity analysis (MPSA) was used to understand key proteins that regulate F1+2, TAT and D-dimer levels. Results The model was able to describe tissue factor (TF)-dependent baseline levels of F1+2, TAT and D-dimer in a non-bleeding state, and their increases in hemostatically normal subjects and hemophilic patients treated with different doses of rFVIIa. The amount of TF required is predicted to be very low in a non-bleeding state. The model also predicts that these biomarker levels will be similar in hemostatically normal subjects and hemophilic patients. MPSA revealed that F1+2 and TAT levels are highly correlated, and that D-dimer is
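
    As a minimal sketch of one ingredient named above, the two-compartment pharmacokinetic model for rFVIIa, the code below integrates a generic two-compartment system after an IV bolus. The rate constants and dose are hypothetical illustrations, not the study's fitted values:

      import numpy as np
      from scipy.integrate import solve_ivp

      def two_compartment(t, y, k10, k12, k21):
          # y[0]: drug amount in the central compartment (plasma),
          # y[1]: drug amount in the peripheral compartment
          central, peripheral = y
          dcentral = -(k10 + k12) * central + k21 * peripheral
          dperipheral = k12 * central - k21 * peripheral
          return [dcentral, dperipheral]

      # Hypothetical rate constants (1/h) and IV bolus dose (arbitrary units)
      k10, k12, k21, dose = 0.5, 0.3, 0.2, 100.0
      sol = solve_ivp(two_compartment, (0.0, 24.0), [dose, 0.0],
                      args=(k10, k12, k21), dense_output=True)
      t = np.linspace(0.0, 24.0, 9)
      print(sol.sol(t)[0])  # central amounts that would drive the QSP network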

  3. A model describing vestibular detection of body sway motion.

    NASA Technical Reports Server (NTRS)

    Nashner, L. M.

    1971-01-01

    An experimental technique was developed which facilitated the formulation of a quantitative model describing vestibular detection of body sway motion in a postural response mode. All cues, except vestibular ones, which gave a subject an indication that he was beginning to sway, were eliminated using a specially designed two-degree-of-freedom platform; body sway was then induced and resulting compensatory responses at the ankle joints measured. Hybrid simulation compared the experimental results with models of the semicircular canals and utricular otolith receptors. Dynamic characteristics of the resulting canal model compared closely with characteristics of models which describe eye movement and subjective responses to body rotational motions. The average threshold level, in the postural response mode, however, was considerably lower. Analysis indicated that the otoliths probably play no role in the initial detection of body sway motion.

  4. A Model Describing Stable Coherent Synchrotron Radiation in Storage Rings

    NASA Astrophysics Data System (ADS)

    Sannibale, F.; Byrd, J. M.; Loftsdóttir, Á.; Venturini, M.; Abo-Bakr, M.; Feikes, J.; Holldack, K.; Kuske, P.; Wüstefeld, G.; Hübers, H.-W.; Warnock, R.

    2004-08-01

    We present a model describing high power stable broadband coherent synchrotron radiation (CSR) in the terahertz frequency region in an electron storage ring. The model includes distortion of bunch shape from the synchrotron radiation (SR), which enhances higher frequency coherent emission, and limits to stable emission due to an instability excited by the SR wakefield. It gives a quantitative explanation of several features of the recent observations of CSR at the BESSYII storage ring. We also use this model to optimize the performance of a source for stable CSR emission.

  5. Quantitative reactive modeling and verification.

    PubMed

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  6. 6 Principles for Quantitative Reasoning and Modeling

    ERIC Educational Resources Information Center

    Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye

    2014-01-01

    Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…

  7. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  9. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  10. Using concept maps to describe undergraduate students’ mental model in microbiology course

    NASA Astrophysics Data System (ADS)

    Hamdiyati, Y.; Sudargo, F.; Redjeki, S.; Fitriani, A.

    2018-05-01

    The purpose of this research was to describe students' mental models in a mental-model-based microbiology course, using concept maps as the assessment tool. Respondents were 5th-semester undergraduate students in Biology Education at Universitas Pendidikan Indonesia. The mental modelling instrument used was the concept map. Data were collected for the Bacteria subtopic. A concept map rubric was subsequently developed with a maximum score of 4. Quantitative data were converted into qualitative levels of mental model, namely: emergent = score 1, transitional = score 2, close to extended = score 3, and extended = score 4. The results showed that the mental model level on the Bacteria subtopic before the implementation of the mental-model-based microbiology course was transitional. After implementation of the course, mental models were at the transitional, close to extended, and extended levels. This indicated an increase in the level of students' mental models after the implementation of the mental-model-based microbiology course using concept maps as the assessment tool.

  11. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, so large-scale multi-cellular modeling is often necessary, and cellular automata are natural candidates for it. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to address the heavy time consumption of simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing an asynchronous adaptive time step in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup of 4-5 times is achieved in the given example. Conclusions Strategies for reducing simulation time are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901
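
    A minimal sketch of the asynchronous adaptive time-step idea described above (not the paper's implementation): each cell integrates its own ODE with a local step that grows while a step-doubling error estimate stays within tolerance, so quiescent cells take far fewer steps than excited ones:

      def advance_cell(f, y, t, t_end, dt, tol=1e-6):
          """Integrate dy/dt = f(t, y) for one cell, adapting dt by step doubling."""
          while t < t_end:
              dt = min(dt, t_end - t)
              full = y + dt * f(t, y)                # one Euler step
              half = y + 0.5 * dt * f(t, y)          # two half steps
              half = half + 0.5 * dt * f(t + 0.5 * dt, half)
              err = abs(full - half)                 # local error estimate
              if err <= tol:                         # accept and grow the step
                  t, y = t + dt, half
                  dt *= 2.0
              else:                                  # reject and shrink the step
                  dt *= 0.5
          return y

      # A slowly decaying (quiescent) cell reaches t_end in few, large steps,
      # which is where the speedup in a large tissue model comes from.
      decay = lambda t, y: -0.1 * y
      print(advance_cell(decay, 1.0, 0.0, 10.0, dt=0.01))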

  12. Quantitative law describing market dynamics before and after interest-rate change.

    PubMed

    Petersen, Alexander M; Wang, Fengzhong; Havlin, Shlomo; Stanley, H Eugene

    2010-06-01

    We study the behavior of U.S. markets both before and after U.S. Federal Open Market Committee meetings and show that the announcement of a U.S. Federal Reserve rate change causes a financial shock, where the dynamics after the announcement are described by an analog of the Omori earthquake law. We quantify the rate n(t) of aftershocks following an interest-rate change at time T and find power-law decay which scales as n(t-T) ~ (t-T)^(-Ω), with Ω positive. Surprisingly, we find that the same law describes the rate n'(|t-T|) of "preshocks" before the interest-rate change at time T. This study quantitatively relates the size of the market response to the news which caused the shock and uncovers the presence of quantifiable preshocks. We demonstrate that the news associated with an interest-rate change is responsible for causing both the anticipation before the announcement and the surprise after the announcement. We estimate the magnitude of financial news using the relative difference between the U.S. Treasury Bill and the Federal Funds effective rate. Our results are consistent with the "sign effect," in which "bad news" has a larger impact than "good news." Furthermore, we observe significant volatility aftershocks, confirming a "market under-reaction" that lasts at least one trading day.
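
    The quoted scaling n(t-T) ~ (t-T)^(-Ω) can be checked by a straight-line fit in log-log coordinates. The sketch below uses a synthetic aftershock-rate series with a known exponent, not the market data of the paper:

      import numpy as np

      T = 0.0                                  # announcement time
      t = T + np.arange(1.0, 241.0)            # minutes after the announcement
      true_omega = 0.7
      rate = 50.0 * (t - T) ** (-true_omega)   # synthetic aftershock rate n(t-T)

      # Power-law decay is a straight line in log-log space; its slope is -Ω.
      slope, intercept = np.polyfit(np.log(t - T), np.log(rate), 1)
      print("estimated Omega:", -slope)        # recovers ~0.7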

  13. Quantitative law describing market dynamics before and after interest-rate change

    NASA Astrophysics Data System (ADS)

    Petersen, Alexander M.; Wang, Fengzhong; Havlin, Shlomo; Stanley, H. Eugene

    2010-06-01

    We study the behavior of U.S. markets both before and after U.S. Federal Open Market Committee meetings and show that the announcement of a U.S. Federal Reserve rate change causes a financial shock, where the dynamics after the announcement are described by an analog of the Omori earthquake law. We quantify the rate n(t) of aftershocks following an interest-rate change at time T and find power-law decay which scales as n(t-T) ~ (t-T)^(-Ω), with Ω positive. Surprisingly, we find that the same law describes the rate n'(|t-T|) of “preshocks” before the interest-rate change at time T. This study quantitatively relates the size of the market response to the news which caused the shock and uncovers the presence of quantifiable preshocks. We demonstrate that the news associated with an interest-rate change is responsible for causing both the anticipation before the announcement and the surprise after the announcement. We estimate the magnitude of financial news using the relative difference between the U.S. Treasury Bill and the Federal Funds effective rate. Our results are consistent with the “sign effect,” in which “bad news” has a larger impact than “good news.” Furthermore, we observe significant volatility aftershocks, confirming a “market under-reaction” that lasts at least one trading day.

  14. Mathematical modelling and quantitative methods.

    PubMed

    Edler, L; Poirier, K; Dourson, M; Kleiner, J; Mileson, B; Nordmann, H; Renwick, A; Slob, W; Walton, K; Würtzen, G

    2002-01-01

    The present review reports on the mathematical methods and statistical techniques presently available for hazard characterisation. The state of the art of mathematical modelling and quantitative methods used currently for regulatory decision-making in Europe and additional potential methods for risk assessment of chemicals in food and diet are described. Existing practices of JECFA, FDA, EPA, etc., are examined for their similarities and differences. A framework is established for the development of new and improved quantitative methodologies. Areas for refinement, improvement and increase of efficiency of each method are identified in a gap analysis. Based on this critical evaluation, needs for future research are defined. It is concluded from our work that mathematical modelling of the dose-response relationship would improve the risk assessment process. An adequate characterisation of the dose-response relationship by mathematical modelling clearly requires the use of a sufficient number of dose groups to achieve a range of different response levels. This need not necessarily lead to an increase in the total number of animals in the study if an appropriate design is used. Chemical-specific data relating to the mode or mechanism of action and/or the toxicokinetics of the chemical should be used for dose-response characterisation whenever possible. It is concluded that a single method of hazard characterisation would not be suitable for all kinds of risk assessments, and that a range of different approaches is necessary so that the method used is the most appropriate for the data available and for the risk characterisation issue. Future refinements to dose-response characterisation should incorporate more clearly the extent of uncertainty and variability in the resulting output.

  15. Quantitative model for the blood pressure-lowering interaction of valsartan and amlodipine.

    PubMed

    Heo, Young-A; Holford, Nick; Kim, Yukyung; Son, Mijeong; Park, Kyungsoo

    2016-12-01

    The objective of this study was to develop a population pharmacokinetic (PK) and pharmacodynamic (PD) model to quantitatively describe the antihypertensive effect of combined therapy with amlodipine and valsartan. PK modelling was used with data collected from 48 healthy volunteers receiving a single dose of a combined formulation of 10 mg amlodipine and 160 mg valsartan. Systolic (SBP) and diastolic blood pressure (DBP) were recorded during combined administration. SBP and DBP data for each drug alone were gathered from the literature. PKPD models of each drug and for combined administration were built with NONMEM 7.3. A two-compartment model with zero-order absorption best described the PK data of both drugs. Amlodipine and valsartan monotherapy effects on SBP and DBP were best described by an Imax model with an effect compartment delay. Combined therapy was described using a proportional interaction term as follows: (D1 + D2) + ALPHA×(D1 × D2). D1 and D2 are the predicted drug effects of amlodipine and valsartan monotherapy, respectively. ALPHA is the interaction term for combined therapy. Quantitative estimates of ALPHA were -0.171 (95% CI: -0.218, -0.143) for SBP and -0.0312 (95% CI: -0.07739, -0.00283) for DBP. These infra-additive interaction terms for both SBP and DBP were consistent with literature results for combined administration of drugs in these classes. PKPD models for SBP and DBP successfully described the time course of the antihypertensive effects of amlodipine and valsartan. An infra-additive interaction between amlodipine and valsartan when used in combined administration was confirmed and quantified. © 2016 The British Pharmacological Society.
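
    A small sketch of the published interaction structure: each monotherapy effect follows an Imax model and the combination adds the proportional term ALPHA×(D1 × D2). The Imax/IC50 values and concentrations below are hypothetical; ALPHA is the SBP estimate quoted above:

      def imax_effect(conc, imax, ic50):
          """Imax model: effect saturates at imax as concentration grows."""
          return imax * conc / (ic50 + conc)

      def combined_effect(d1, d2, alpha):
          """Proportional interaction: (D1 + D2) + ALPHA * (D1 * D2)."""
          return (d1 + d2) + alpha * (d1 * d2)

      d1 = imax_effect(conc=30.0, imax=25.0, ic50=10.0)  # e.g. amlodipine, mmHg
      d2 = imax_effect(conc=5.0, imax=20.0, ic50=3.0)    # e.g. valsartan, mmHg
      print(combined_effect(d1, d2, alpha=-0.171))       # < d1 + d2: infra-additive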

  16. Quantitative model for the blood pressure‐lowering interaction of valsartan and amlodipine

    PubMed Central

    Heo, Young‐A; Holford, Nick; Kim, Yukyung; Son, Mijeong

    2016-01-01

    Aims The objective of this study was to develop a population pharmacokinetic (PK) and pharmacodynamic (PD) model to quantitatively describe the antihypertensive effect of combined therapy with amlodipine and valsartan. Methods PK modelling was used with data collected from 48 healthy volunteers receiving a single dose of a combined formulation of 10 mg amlodipine and 160 mg valsartan. Systolic (SBP) and diastolic blood pressure (DBP) were recorded during combined administration. SBP and DBP data for each drug alone were gathered from the literature. PKPD models of each drug and for combined administration were built with NONMEM 7.3. Results A two‐compartment model with zero‐order absorption best described the PK data of both drugs. Amlodipine and valsartan monotherapy effects on SBP and DBP were best described by an Imax model with an effect compartment delay. Combined therapy was described using a proportional interaction term as follows: (D1 + D2) + ALPHA×(D1 × D2). D1 and D2 are the predicted drug effects of amlodipine and valsartan monotherapy, respectively. ALPHA is the interaction term for combined therapy. Quantitative estimates of ALPHA were −0.171 (95% CI: −0.218, −0.143) for SBP and −0.0312 (95% CI: −0.07739, −0.00283) for DBP. These infra‐additive interaction terms for both SBP and DBP were consistent with literature results for combined administration of drugs in these classes. Conclusion PKPD models for SBP and DBP successfully described the time course of the antihypertensive effects of amlodipine and valsartan. An infra‐additive interaction between amlodipine and valsartan when used in combined administration was confirmed and quantified. PMID:27504853

  17. A basis for a visual language for describing, archiving and analyzing functional models of complex biological systems

    PubMed Central

    Cook, Daniel L; Farley, Joel F; Tapscott, Stephen J

    2001-01-01

    Background: We propose that a computerized, internet-based graphical description language for systems biology will be essential for describing, archiving and analyzing complex problems of biological function in health and disease. Results: We outline here a conceptual basis for designing such a language and describe BioD, a prototype language that we have used to explore the utility and feasibility of this approach to functional biology. Using example models, we demonstrate that a rather limited lexicon of icons and arrows suffices to describe complex cell-biological systems as discrete models that can be posted and linked on the internet. Conclusions: Given available computer and internet technology, BioD may be implemented as an extensible, multidisciplinary language that can be used to archive functional systems knowledge and be extended to support both qualitative and quantitative functional analysis. PMID:11305940

  18. Designing automation for human use: empirical studies and quantitative models.

    PubMed

    Parasuraman, R

    2000-07-01

    An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.

  19. The mathematics of cancer: integrating quantitative models.

    PubMed

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.

  20. Quantitative validation of an air-coupled ultrasonic probe model by Interferometric laser tomography

    NASA Astrophysics Data System (ADS)

    Revel, G. M.; Pandarese, G.; Cavuto, A.

    2012-06-01

    The present paper describes the quantitative validation of a finite element (FE) model of the ultrasound beam generated by an air-coupled non-contact ultrasound transducer. The model boundary conditions are given by vibration velocities measured by laser vibrometry on the probe membrane. The proposed validation method is based on the comparison between the simulated 3D pressure field and the pressure data measured with the interferometric laser tomography technique. The model details and the experimental techniques are described in the paper. The analysis of results shows the effectiveness of the proposed approach and the possibility of quantitatively assessing and predicting the generated acoustic pressure field, with maximum discrepancies on the order of 20% due to uncertainty effects. This step is important for determining the real applicability of air-coupled probes to complex problems and for simulating the whole inspection procedure, even while the component is being designed, so as to virtually verify its inspectability.

  21. Relevance and limitations of crowding, fractal, and polymer models to describe nuclear architecture.

    PubMed

    Huet, Sébastien; Lavelle, Christophe; Ranchon, Hubert; Carrivain, Pascal; Victor, Jean-Marc; Bancaud, Aurélien

    2014-01-01

    Chromosome architecture plays an essential role in all nuclear functions, and its physical description has attracted considerable interest over the last few years in the biophysics community. This research at the frontier of physics and biology has been stimulated by the demand for quantitative analysis of molecular biology experiments, which provide comprehensive data on chromosome folding, and of live cell imaging experiments that enable researchers to visualize selected chromosome loci in living or fixed cells. In this review our goal is to survey several non-mutually exclusive models that have emerged to describe the folding of DNA in the nucleus, the dynamics of proteins in the nucleoplasm, and the movements of chromosome loci. We focus on three classes of models, namely molecular crowding, fractal, and polymer models, draw comparisons, and discuss their merits and limitations in the context of chromosome structure and dynamics and of nuclear protein navigation in the nucleoplasm. Finally, we identify future challenges on the roadmap to a unified model of the nuclear environment. © 2014 Elsevier Inc. All rights reserved.

  1. A model describing diffusion in prostate cancer.

    PubMed

    Gilani, Nima; Malcolm, Paul; Johnson, Glyn

    2017-07-01

    Quantitative diffusion MRI has frequently been studied as a means of grading prostate cancer. Interpretation of results is complicated by the nature of prostate tissue, which consists of four distinct compartments: vascular, ductal lumen, epithelium, and stroma. Current diffusion measurements are an ill-defined weighted average of these compartments. In this study, prostate diffusion is analyzed in terms of a model that takes explicit account of tissue compartmentalization, exchange effects, and the non-Gaussian behavior of tissue diffusion. The model assumes that exchange between the cellular (ie, stromal plus epithelial) and the vascular and ductal compartments is slow. Ductal and cellular diffusion characteristics are estimated by Monte Carlo simulation and a two-compartment exchange model, respectively. Vascular pseudodiffusion is represented by an additional signal at b = 0. Most model parameters are obtained either from published data or by comparing model predictions with the published results from 41 studies. Model prediction error is estimated using 10-fold cross-validation. Agreement between model predictions and published results is good. The model satisfactorily explains the variability of ADC estimates found in the literature. A reliable model that predicts the diffusion behavior of benign and cancerous prostate tissue of different Gleason scores has been developed. Magn Reson Med 78:316-326, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
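
    As an illustration of the discrete-event idea (hypothetical, not the article's model), the sketch below lets storage times emerge from delivery and sale events in a FIFO queue instead of sampling them independently:

      import heapq, random

      random.seed(1)
      events = []                                    # (time, kind) min-heap
      for day in range(30):
          heapq.heappush(events, (day, "delivery"))  # daily replenishment
          heapq.heappush(events, (day + random.random(), "sale"))

      stock, storage_times = [], []
      while events:
          t, kind = heapq.heappop(events)
          if kind == "delivery":
              stock.append(t)                        # record item arrival time
          elif kind == "sale" and stock:
              storage_times.append(t - stock.pop(0)) # FIFO: oldest item sold first

      # The emergent storage times would feed the microbial growth step of a
      # QMRA; the tail (longest storage) drives the tail of the risk distribution.
      print(max(storage_times))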

  3. Integrating Quantitative and Ethnographic Methods to Describe the Classroom. Report No. 5083.

    ERIC Educational Resources Information Center

    Malitz, David; And Others

    The debate between proponents of ethnographic and quantitative methodology in classroom observation is reviewed, and the respective strengths and weaknesses of the two approaches are discussed. These methodologies are directly compared in a study that conducted simultaneous ethnographic and quantitative observations on nine classrooms. It is…

  4. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    PubMed

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-02

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Neither the nature of these interactions nor, more fundamentally, the structure of these heterogeneous polyphenolic molecules is completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds, employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series.

  5. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that the resulting waves obey the Schrödinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  6. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    PubMed

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data, and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  7. Generalized PSF modeling for optimized quantitation in PET imaging.

    PubMed

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
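
    A toy 1-D illustration of the matched- versus mismatched-PSF question (not the paper's OS-EM framework): a tumour profile is blurred by a true Gaussian PSF and then iteratively corrected assuming an under-estimated, matched, or over-estimated PSF width. The over-estimated kernel overshoots at edges, mirroring the edge enhancement noted above. All widths and the simple Van Cittert iteration are illustrative:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      true_sigma = 3.0
      profile = np.zeros(101)
      profile[45:56] = 1.0                          # 11-pixel tumour, contrast 1.0
      measured = gaussian_filter1d(profile, true_sigma)

      for modeled_sigma in (1.5, 3.0, 4.5):         # under- / matched / over-estimated
          estimate = measured.copy()
          for _ in range(20):                       # Van Cittert deblurring iterations
              estimate += measured - gaussian_filter1d(estimate, modeled_sigma)
          print(modeled_sigma, "recovered max:", round(float(estimate.max()), 3))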

  8. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive, local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study is that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty of the large-scale watershed model. Further, this example shows how research can be structured to yield scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  9. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1-, 2-, and 3-site CG models for heptane, as well as 1- and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.

  10. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    PubMed Central

    Zhu, Hao; Sun, Yan; Rajagopal, Gunaretnam; Mondry, Adrian; Dhar, Pawan

    2004-01-01

    Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described. PMID:15339335

  11. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  12. Quantitative model of super-Arrhenian behavior in glass forming materials

    NASA Astrophysics Data System (ADS)

    Caruthers, J. M.; Medvedev, G. A.

    2018-05-01

    The key feature of glass forming liquids is the super-Arrhenian temperature dependence of the mobility, where the mobility can increase by ten orders of magnitude or more as the temperature is decreased if crystallization does not intervene. A fundamental description of the super-Arrhenian behavior has been developed; specifically, the logarithm of the relaxation time is a linear function of 1/Ūx with slope B, where Ūx is the independently determined excess molar internal energy and B is a material constant. This one-parameter mobility model quantitatively describes data for 21 glass forming materials, which are all the materials for which sufficient experimental data exist for analysis. The effect of pressure on the logarithm of the mobility is also described using the same Ūx(T, p) function, determined from the difference between the liquid and crystalline internal energies. It is also shown that B is well correlated with the heat of fusion. The prediction of the B/Ūx model is compared to the Adam and Gibbs 1/(T·S̄x) model, where the B/Ūx model is significantly better in unifying the full complement of mobility data. The implications of the B/Ūx model for the development of a fundamental description of glass are discussed.
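
    As a rough numerical illustration of the one-parameter mobility model quoted above, the sketch below evaluates log10 of the relaxation time as a linear function of 1/Ūx with slope B. The B, intercept, and Ūx values are invented for illustration, not taken from the paper:

      # Hypothetical constants (the paper determines Ux independently from
      # thermodynamic data for each material).
      B = 40.0             # material constant, kJ/mol
      log_tau_inf = -12.0  # high-temperature intercept, log10(seconds)

      def log10_tau(Ux_kJ_per_mol):
          """log10 relaxation time, linear in 1/Ux with slope B."""
          return log_tau_inf + B / Ux_kJ_per_mol

      # Ux shrinks on cooling, so B/Ux grows and the mobility collapses:
      for Ux in (8.0, 4.0, 2.4):
          print(Ux, "->", round(log10_tau(Ux), 1))  # spans ~12 decades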

  13. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  14. Describing dengue epidemics: Insights from simple mechanistic models

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2012-09-01

    We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add to epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data, offering a promising perspective on the inference of parameter values from dengue case notifications.

  15. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    PubMed

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of targets and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
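
    For intuition only, the sketch below runs a stepwise qPCR recursion in which the per-cycle efficiency falls as primers are consumed. The functional form and numbers are invented stand-ins for the paper's equilibrium solutions, but they reproduce the early exponential phase and the late plateau that a constant-efficiency model misses:

      def qpcr_curve(target0, primer0, cycles=40):
          """Stepwise amplification with primer-limited efficiency."""
          target, primer, curve = target0, primer0, []
          for _ in range(cycles):
              efficiency = primer / (primer + target)  # crude stand-in for the
              new = efficiency * target                # annealing equilibrium
              target += new
              primer = max(primer - new, 0.0)
              curve.append(target)
          return curve

      curve = qpcr_curve(target0=1e3, primer0=1e12)
      print(curve[14], curve[39])  # near-doubling early, plateau late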

  16. Improving models for describing phosphorus cycling in agricultural soils

    USDA-ARS?s Scientific Manuscript database

    The mobility of phosphorus in the environment is controlled to a large extent by its sorption to soil. Therefore, an important component of all P loss models is how the model describes the biogeochemical processes governing P sorption and desorption to soils. The most common approach to modeling P c...

  17. Model-Based Linkage Analysis of a Quantitative Trait.

    PubMed

    Song, Yeunjoo E; Song, Sunah; Schnell, Audrey H

    2017-01-01

    Linkage analysis is a family-based method used to examine whether any typed genetic markers cosegregate with a given trait, in this case a quantitative trait. If linkage exists, this is taken as evidence in support of a genetic basis for the trait. Historically, linkage analysis was performed using a binary disease trait, but has been extended to include quantitative disease measures. Quantitative traits are desirable as they provide more information than binary traits. Linkage analysis can be performed using single-marker methods (one marker at a time) or multipoint (using multiple markers simultaneously). In model-based linkage analysis the genetic model for the trait of interest is specified. There are many software options for performing linkage analysis. Here, we use the program package Statistical Analysis for Genetic Epidemiology (S.A.G.E.). S.A.G.E. was chosen because it also includes programs to perform data cleaning procedures and to generate and test genetic models for a quantitative trait, in addition to performing linkage analysis. We demonstrate in detail the process of running the program LODLINK to perform single-marker analysis, and MLOD to perform multipoint analysis using output from SEGREG, where SEGREG was used to determine the best fitting statistical model for the trait.

  18. The linearized multistage model and the future of quantitative risk assessment.

    PubMed

    Crump, K S

    1996-10-01

    The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for
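
    For reference, a sketch of the multistage dose-response form that underlies the LMS approach, P(d) = 1 - exp(-(q0 + q1·d + q2·d²)), with extra risk measured over background. The q values are hypothetical, and the regulatory LMS slope is an upper confidence bound on q1, which this sketch does not compute:

      import math

      def multistage_p(d, q):
          """Multistage model: P(d) = 1 - exp(-(q0 + q1*d + q2*d**2 + ...))."""
          return 1.0 - math.exp(-sum(qi * d**i for i, qi in enumerate(q)))

      def extra_risk(d, q):
          """Risk above background, rescaled to the non-responding fraction."""
          p0 = multistage_p(0.0, q)
          return (multistage_p(d, q) - p0) / (1.0 - p0)

      q = (0.01, 0.002, 0.0005)     # hypothetical q0, q1, q2
      print(extra_risk(0.1, q))     # at low dose, approximately q1 * d = 2e-4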

  19. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part II Stochastic Models. Part II, Chapter 4.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…

  20. Querying quantitative logic models (Q2LM) to study intracellular signaling networks and cell-cytokine interactions.

    PubMed

    Morris, Melody K; Shriver, Zachary; Sasisekharan, Ram; Lauffenburger, Douglas A

    2012-03-01

    Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales, ranging from molecular to organismal, in the same model is not trivial. Here, we present a framework called "querying quantitative logic models" (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model; and (ii) a model for pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning molecular design of granulocyte colony stimulating factor. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
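
    A minimal sketch of the cFL ingredients named above: a Hill-type transfer function supplies the quantitative dependence, and fuzzy AND/OR gates combine influences. The tiny network and all parameters are hypothetical, not a model from the paper:

      def hill(x, ec50=0.5, n=3.0):
          """Normalized Hill transfer function mapping [0, 1] -> [0, 1]."""
          return x**n / (ec50**n + x**n)

      def fuzzy_and(a, b):
          return min(a, b)   # standard min-based fuzzy AND

      def fuzzy_or(a, b):
          return max(a, b)   # standard max-based fuzzy OR

      # Toy network: downstream = (ligand AND receptor) OR bypass
      ligand, receptor, bypass = 0.9, 0.7, 0.2
      downstream = fuzzy_or(fuzzy_and(hill(ligand), hill(receptor)), hill(bypass))
      print(downstream)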

  1. A standard protocol for describing individual-based and agent-based models

    USGS Publications Warehouse

    Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.

    2006-01-01

    Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.

  2. Phosphate adsorption on hematite, kaolinite, and kaolinite-hematite (k-h) systems as described by a constant capacitance model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ioannou, A.; Dimirkou, A.

    1997-08-01

    The constant capacitance model was used to describe phosphate adsorption on hematite, kaolinite, and a kaolinite-hematite system (k-h). The model assumes a ligand exchange mechanism and considers the charge on both adsorbate and adsorbent. The model is shown to provide a quantitative description of phosphate adsorption on these adsorbents, including the effect of varying pH values. The computer program Ma-Za 2, a program that fits equilibrium constants to experimental data using an optimization technique, was used to obtain optimal values for the anion surface complexation constants on hematite, kaolinite, and a kaolinite-hematite system, while the PC program Ma-Za 1 in Q-Basic language was used for the application of the constant capacitance model. The model represented adsorption of phosphate anions well over the entire pH range studied (3.8--9.0). The main advantage of the model is its ability to represent changes in anion adsorption occurring with changes in pH. Extension of the model to describe phosphate adsorption in a mixed system, such as the kaolinite-hematite system, using the surface protonation-dissociation constant of hematite was qualitatively successful. In the mixed system the model reproduced the shape of the adsorption isotherms well over the pH range 3.8--9.0. However, phosphate adsorption was overestimated. The hematite and the kaolinite-hematite system were synthesized and identified by X-ray, NMR, and FT-IR spectroscopy.

  3. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  4. Modeling with Young Students--Quantitative and Qualitative.

    ERIC Educational Resources Information Center

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…

  5. HERMES: A Model to Describe Deformation, Burning, Explosion, and Detonation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reaugh, J E

    2011-11-22

    HERMES (High Explosive Response to MEchanical Stimulus) was developed to fill the need for a model to describe an explosive response of the type described as BVR (Burn to Violent Response) or HEVR (High Explosive Violent Response). Characteristically this response leaves a substantial amount of explosive unconsumed, the time to reaction is long, and the peak pressure developed is low. In contrast, detonations characteristically consume all explosive present, the time to reaction is short, and peak pressures are high. However, most of the previous models to describe explosive response were models for detonation. The earliest models to describe the response of explosives to mechanical stimulus in computer simulations were applied to intentional detonation (performance) of nearly ideal explosives. In this case, an ideal explosive is one with a vanishingly small reaction zone. A detonation is supersonic with respect to the undetonated explosive (reactant). The reactant cannot respond to the pressure of the detonation before the detonation front arrives, so the precise compressibility of the reactant does not matter. Further, the mesh sizes that were practical for the computer resources then available were large with respect to the reaction zone. As a result, methods then used to model detonations, known as β-burn or program burn, were not intended to resolve the structure of the reaction zone. Instead, these methods spread the detonation front over a few finite-difference zones, in the same spirit that artificial viscosity is used to spread the shock front in inert materials over a few finite-difference zones. These methods are still widely used when the structure of the reaction zone and the build-up to detonation are unimportant. Later detonation models resolved the reaction zone. These models were applied both to performance, particularly as it is affected by the size of the charge, and to situations in which the stimulus was less than that needed for

  6. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models.

    PubMed

    Allen, R J; Rieger, T R; Musante, C J

    2016-03-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
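
    As a simplified stand-in for the selection step (the paper's actual algorithm is more involved), rejection sampling shows how a candidate ensemble can be thinned into a virtual population matching an observed baseline distribution without weighting; all distributions below are hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Candidate virtual patients: one baseline biomarker per patient,
    # over-dispersed relative to the clinical cohort.
    candidates = rng.normal(loc=5.0, scale=2.0, size=10_000)

    target = stats.norm(loc=5.5, scale=1.0)    # observed clinical distribution
    proposal = stats.norm(loc=5.0, scale=2.0)  # how candidates were generated

    # Accept each candidate with probability proportional to target/proposal density.
    ratio = target.pdf(candidates) / proposal.pdf(candidates)
    keep = rng.random(candidates.size) < ratio / ratio.max()
    virtual_population = candidates[keep]

    print(virtual_population.mean(), virtual_population.std())  # ~5.5, ~1.0
    ```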

  7. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

    PubMed Central

    Rieger, TR; Musante, CJ

    2016-01-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777

  8. Quantitative model validation of manipulative robot systems

    NASA Astrophysics Data System (ADS)

    Kartowisastro, Iman Herwidiana

    This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is more objective than the commonly used visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, which explains the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, where all links are assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. The conventional feedback control system is used in developing the model. The system's behavior under parameter changes is investigated, as some parameters are redundant. This work is important in selecting the most important parameters to be distorted, and it leads to a new term, the fundamental parameters. The transfer function approach has been chosen to validate an industrial robot quantitatively against the measured data due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and better understanding of the model properties. After several improvements in the model, the fidelity criterion obtained was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied in a robot manipulative system. Using the validated model, the importance of

  9. On a Mathematical Model with Noncompact Boundary Conditions Describing Bacterial Population

    NASA Astrophysics Data System (ADS)

    Boulanouar, Mohamed

    2013-04-01

    In this work, we are concerned with the well-posedness of a mathematical model describing a maturation-velocity structured bacterial population. Each bacterium is distinguished by its degree of maturity and its maturation velocity. The bacterial mitosis is mathematically described by noncompact boundary conditions. We show that the mathematical model is governed by a positive strongly continuous semigroup.

  10. What Are We Doing When We Translate from Quantitative Models?

    PubMed Central

    Critchfield, Thomas S; Reed, Derek D

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively. PMID:22478533

  11. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  12. A Kinetic Model Describing Injury-Burden in Team Sports.

    PubMed

    Fuller, Colin W

    2017-12-01

    Injuries in team sports are normally characterised by the incidence, severity, and location and type of injuries sustained: these measures, however, do not provide an insight into the variable injury-burden experienced during a season. Injury burden varies according to the team's match and training loads, the rate at which injuries are sustained and the time taken for these injuries to resolve. At the present time, this time-based variation of injury burden has not been modelled. The aims of this study were to develop a kinetic model describing the time-based injury burden experienced by teams in elite team sports and to demonstrate the model's utility. Rates of injury were quantified using a large eight-season database of rugby injuries (5253) and exposure (60,085 player-match-hours) in English professional rugby. Rates of recovery from injury were quantified using time-to-recovery analysis of the injuries. The kinetic model proposed for predicting a team's time-based injury burden is based on a composite rate equation developed from the incidence of injury, a first-order rate of recovery from injury and the team's playing load. The utility of the model was demonstrated by examining common scenarios encountered in elite rugby. The kinetic model developed describes and predicts the variable injury-burden arising from match play during a season of rugby union based on the incidence of match injuries, the rate of recovery from injury and the playing load. The model is equally applicable to other team sports and other scenarios.
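
    The composite rate structure described (injuries accrue with playing load, and resolve at a first-order rate) reduces to a one-line balance, dB/dt = incidence*load - k*B. A sketch of that balance with hypothetical rate constants, not the paper's estimates:

    ```python
    import numpy as np

    def injury_burden(weeks, incidence, load, k_recovery, burden0=0.0):
        """Integrate dB/dt = incidence*load - k_recovery*B with a simple Euler scheme.

        incidence  : injuries per player-match-hour
        load       : player-match-hours per week
        k_recovery : first-order recovery rate constant (1/week)
        """
        dt, b, trace = 0.1, burden0, []
        for _ in range(int(weeks / dt)):
            b += dt * (incidence * load - k_recovery * b)
            trace.append(b)
        return np.array(trace)

    # Burden rises toward the steady state incidence*load/k_recovery (~10 here).
    print(injury_burden(30, incidence=0.09, load=40, k_recovery=0.35)[-1])
    ```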

  13. Quantitative modeling of soil genesis processes

    NASA Technical Reports Server (NTRS)

    Levine, E. R.; Knox, R. G.; Kerber, A. G.

    1992-01-01

    For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.

  14. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  15. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
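
    The contrast behind the LOD score (a two-component normal mixture versus a single normal) is short to write down. In this sketch the genotype probabilities and QTL parameters are simply taken as given, whereas interval mapping derives them from flanking markers and maximum likelihood, so the code illustrates the statistic only:

    ```python
    import numpy as np
    from scipy import stats

    def lod_score(y, p_q, mu1, mu2, sigma):
        """log10 likelihood ratio of a two-component normal mixture vs one normal.

        p_q: per-individual probability of carrying QTL genotype 1 (assumed
             precomputed from flanking-marker genotypes).
        """
        mix = p_q * stats.norm.pdf(y, mu1, sigma) + (1 - p_q) * stats.norm.pdf(y, mu2, sigma)
        single = stats.norm.pdf(y, y.mean(), y.std())
        return np.sum(np.log10(mix)) - np.sum(np.log10(single))

    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
    p_q = np.repeat([0.9, 0.1], 100)        # informative markers near the QTL
    print(lod_score(y, p_q, mu1=0.0, mu2=1.0, sigma=1.0))
    ```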

  16. Testing process predictions of models of risky choice: a quantitative model comparison approach

    PubMed Central

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  17. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for them. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian Updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility for the entire model often rests on the credibility and traceability of the data.

  18. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    NASA Astrophysics Data System (ADS)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
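
    Hydraulic models of this kind start from Hagen-Poiseuille flow through tracheid-sized capillaries, for which the lumen-specific conductivity is k = d^2/(32*mu). A sketch of that baseline calculation, with illustrative diameters rather than measured Asteroxylon anatomy:

    ```python
    def tracheid_conductivity(diameter_m, viscosity_pa_s=1.002e-3):
        """Lumen-specific conductivity of an ideal capillary, k = d**2 / (32*mu),
        converted from m^2/(Pa*s) to m^2/(MPa*s) (Hagen-Poiseuille)."""
        return diameter_m**2 / (32.0 * viscosity_pa_s) * 1e6

    # Conductivity scales with the square of lumen diameter.
    for d_um in (10, 20, 40):
        print(f"{d_um} um: {tracheid_conductivity(d_um * 1e-6):.4f} m^2/(MPa*s)")
    ```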

  19. Species-free species distribution models describe macroecological properties of protected area networks.

    PubMed

    Robinson, Jason L; Fordyce, James A

    2017-01-01

    Among the greatest challenges facing the conservation of plants and animal species in protected areas are threats from a rapidly changing climate. An altered climate creates both challenges and opportunities for improving the management of protected areas in networks. Increasingly, quantitative tools like species distribution modeling are used to assess the performance of protected areas and predict potential responses to changing climates for groups of species, within a predictive framework. At larger geographic domains and scales, protected area network units have spatial geoclimatic properties that can be described in the gap analysis typically used to measure or aggregate the geographic distributions of species (stacked species distribution models, or S-SDM). We extend the use of species distribution modeling techniques in order to model the climate envelope (or "footprint") of individual protected areas within a network of protected areas distributed across the 48 conterminous United States and managed by the US National Park System. In our approach we treat each protected area as the geographic range of a hypothetical endemic species, then use MaxEnt and 5 uncorrelated BioClim variables to model the geographic distribution of the climatic envelope associated with each protected area unit (modeling the geographic area of park units as the range of a species). We describe the individual and aggregated climate envelopes predicted by a large network of 163 protected areas and briefly illustrate how macroecological measures of geodiversity can be derived from our analysis of the landscape ecological context of protected areas. To estimate trajectories of change in the temporal distribution of climatic features within a protected area network, we projected the climate envelopes of protected areas in current conditions onto a dataset of predicted future climatic conditions. Our results suggest that the climate envelopes of some parks may be locally unique or have

  20. Integrated Environmental Modeling: Quantitative Microbial Risk Assessment

    EPA Science Inventory

    The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...

  1. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544

  2. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
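
    Dempster's rule of combination is itself only a few lines: the masses each diagnostic model assigns to sets of failure hypotheses are multiplied, intersected, and renormalized by the non-conflicting mass. The frame and the masses below are invented for illustration:

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two basic probability assignments (dicts: frozenset -> mass)."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    F = frozenset  # hypotheses: sensor fault (S), actuator fault (A)
    m_model1 = {F({"S"}): 0.6, F({"S", "A"}): 0.4}
    m_model2 = {F({"S"}): 0.3, F({"A"}): 0.5, F({"S", "A"}): 0.2}
    print(dempster_combine(m_model1, m_model2))
    ```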

  3. Quantitative genetics

    USDA-ARS?s Scientific Manuscript database

    The majority of economically important traits targeted for cotton improvement are quantitatively inherited. In this chapter, the current state of cotton quantitative genetics is described and separated into four components. These components include: 1) traditional quantitative inheritance analysis, ...

  4. A quantitative speciation model for the adsorption of organic pollutants on activated carbon.

    PubMed

    Grivé, M; García, D; Domènech, C; Richard, L; Rojo, I; Martínez, X; Rovira, M

    2013-01-01

    Granular activated carbon (GAC) is commonly used as adsorbent in water treatment plants given its high capacity for retaining organic pollutants in aqueous phase. The current knowledge on GAC behaviour is essentially empirical, and no quantitative description of the chemical relationships between GAC surface groups and pollutants has been proposed. In this paper, we describe a quantitative model for the adsorption of atrazine onto GAC surface. The model is based on results of potentiometric titrations and three types of adsorption experiments which have been carried out in order to determine the nature and distribution of the functional groups on the GAC surface, and evaluate the adsorption characteristics of GAC towards atrazine. Potentiometric titrations have indicated the existence of at least two different families of chemical groups on the GAC surface, including phenolic- and benzoic-type surface groups. Adsorption experiments with atrazine have been satisfactorily modelled with the geochemical code PhreeqC, assuming that atrazine is sorbed onto the GAC surface in equilibrium (log Ks = 5.1 ± 0.5). Independent thermodynamic calculations suggest a possible adsorption of atrazine on a benzoic derivative. The present work opens a new approach for improving the adsorption capabilities of GAC towards organic pollutants by modifying its chemical properties.

  5. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  7. Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology

    EPA Science Inventory

    A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...

  8. Advances in Geoscience Modeling: Smart Modeling Frameworks, Self-Describing Models and the Role of Standardized Metadata

    NASA Astrophysics Data System (ADS)

    Peckham, Scott

    2016-04-01

    Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders
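
    The standardized interface described amounts to a small contract of control and self-description calls; a sketch of what such an interface can look like in Python (names echo the CSDMS Basic Model Interface style but are illustrative, not the exact specification):

    ```python
    from abc import ABC, abstractmethod

    class ModelInterface(ABC):
        """Standardized interface a framework can drive without knowing model internals."""

        # --- control functions ---
        @abstractmethod
        def initialize(self, config_file: str) -> None: ...

        @abstractmethod
        def update(self) -> None: ...   # advance state variables one time step

        @abstractmethod
        def finalize(self) -> None: ...

        # --- self-description functions ---
        @abstractmethod
        def get_input_var_names(self) -> tuple[str, ...]: ...

        @abstractmethod
        def get_output_var_names(self) -> tuple[str, ...]: ...

        @abstractmethod
        def get_var_units(self, name: str) -> str: ...

        @abstractmethod
        def get_time_step(self) -> float: ...
    ```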

  9. Quantitative metrics that describe river deltas and their channel networks

    NASA Astrophysics Data System (ADS)

    Edmonds, Douglas A.; Paola, Chris; Hoyal, David C. J. D.; Sheets, Ben A.

    2011-12-01

    Densely populated river deltas are losing land at an alarming rate and to successfully restore these environments we must understand the details of their morphology. Toward this end we present a set of five metrics that describe delta morphology: (1) the fractal dimension, (2) the distribution of island sizes, (3) the nearest-edge distance, (4) a synthetic distribution of sediment fluxes at the shoreline, and (5) the nourishment area. The nearest-edge distance is the shortest distance to channelized or unchannelized water from a given location on the delta and is analogous to the inverse of drainage density in tributary networks. The nourishment area is the downstream delta area supplied by the sediment coming through a given channel cross section and is analogous to catchment area in tributary networks. As a first step, we apply these metrics to four relatively simple, fluvially dominated delta networks. For all these deltas, the average nearest-edge distances are remarkably constant moving down delta suggesting that the network organizes itself to maintain a consistent distance to the nearest channel. Nourishment area distributions can be predicted from a river mouth bar model of delta growth, and also scale with the width of the channel and with the length of the longest channel, analogous to Hack's law for drainage basins. The four delta channel networks are fractal, but power laws and scale invariance appear to be less pervasive than in tributary networks. Thus, deltas may occupy an advantageous middle ground between complete similarity and complete dissimilarity, where morphologic differences indicate different behavior.
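
    Of the five metrics, the nearest-edge distance is the most direct to compute from a binary land/water mask via a distance transform; the mask below is a toy array, not a real delta:

    ```python
    import numpy as np
    from scipy import ndimage

    # Toy mask: True = land, False = channelized or unchannelized water.
    land = np.ones((50, 50), dtype=bool)
    land[:, 20:24] = False            # a straight "channel"

    # Distance from every land cell to the nearest water cell (in cells).
    nearest_edge = ndimage.distance_transform_edt(land)
    print(nearest_edge[land].mean())  # mean nearest-edge distance over the delta
    ```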

  10. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    PubMed

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. A functional-structural model of rice linking quantitative genetic information with morphological development and physiological processes.

    PubMed

    Xu, Lifeng; Henke, Michael; Zhu, Jun; Kurth, Winfried; Buck-Sorlin, Gerhard

    2011-04-01

    Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype-phenotype model, we present here a three-dimensional functional-structural plant model (FSPM) of rice, in which some model parameters are controlled by functions describing the effect of main-effect and epistatic QTLs. The model simulates the growth and development of rice based on selected ecophysiological processes, such as photosynthesis (source process) and organ formation, growth and extension (sink processes). It was devised using GroIMP, an interactive modelling platform based on the Relational Growth Grammar formalism (RGG). RGG rules describe the course of organ initiation and extension resulting in final morphology. The link between the phenotype (as represented by the simulated rice plant) and the QTL genotype was implemented via a data interface between the rice FSPM and the QTLNetwork software, which computes predictions of QTLs from map data and measured trait data. Using plant height and grain yield, it is shown how QTL information for a given trait can be used in an FSPM, computing and visualizing the phenotypes of different lines of a mapping population. Furthermore, we demonstrate how modification of a particular trait feeds back on the entire plant phenotype via the physiological processes considered. We linked a rice FSPM to a quantitative genetic model, thereby employing QTL information to refine model parameters and visualizing the dynamics of development of the entire phenotype as a result of ecophysiological processes, including the trait(s) for which genetic information is available. Possibilities for further extension of the model, for example for the purposes of ideotype breeding, are discussed.

  12. A functional–structural model of rice linking quantitative genetic information with morphological development and physiological processes

    PubMed Central

    Xu, Lifeng; Henke, Michael; Zhu, Jun; Kurth, Winfried; Buck-Sorlin, Gerhard

    2011-01-01

    Background and Aims Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype–phenotype model, we present here a three-dimensional functional–structural plant model (FSPM) of rice, in which some model parameters are controlled by functions describing the effect of main-effect and epistatic QTLs. Methods The model simulates the growth and development of rice based on selected ecophysiological processes, such as photosynthesis (source process) and organ formation, growth and extension (sink processes). It was devised using GroIMP, an interactive modelling platform based on the Relational Growth Grammar formalism (RGG). RGG rules describe the course of organ initiation and extension resulting in final morphology. The link between the phenotype (as represented by the simulated rice plant) and the QTL genotype was implemented via a data interface between the rice FSPM and the QTLNetwork software, which computes predictions of QTLs from map data and measured trait data. Key Results Using plant height and grain yield, it is shown how QTL information for a given trait can be used in an FSPM, computing and visualizing the phenotypes of different lines of a mapping population. Furthermore, we demonstrate how modification of a particular trait feeds back on the entire plant phenotype via the physiological processes considered. Conclusions We linked a rice FSPM to a quantitative genetic model, thereby employing QTL information to refine model parameters and visualizing the dynamics of development of the entire phenotype as a result of ecophysiological processes, including the trait(s) for which genetic information is available. Possibilities for further extension of the model, for example for the purposes of ideotype breeding, are discussed. PMID:21247905

  13. Evaluation of a cross contamination model describing transfer of Salmonella spp. and Listeria monocytogenes during grinding of pork and beef.

    PubMed

    Møller, C O A; Sant'Ana, A S; Hansen, S K H; Nauta, M J; Silva, L P; Alvarenga, V O; Maffei, D; Silva, F F P; Lopes, J T; Franco, B D G M; Aabo, S; Hansen, T B

    2016-06-02

    In a previous study, a model was developed to describe the transfer and survival of Salmonella during grinding of pork (Møller, C.O.A., Nauta, M.J., Christensen, B.B., Dalgaard, P., Hansen, T.B., 2012. Modelling transfer of Salmonella typhimurium DT104 during simulation of grinding of pork. Journal of Applied Microbiology 112 (1), 90-98). The robustness of this model is now evaluated by studying its performance for predicting the transfer and survival of Salmonella spp. and Listeria monocytogenes during grinding of different types of meat (pork and beef), using two different grinders, different sizes and different numbers of pieces of meats to be ground. A total of 19 grinding trials were collected. Acceptable Simulation Zone (ASZ), visual inspection of the data, Quantitative Microbiological Risk Assessment (QMRA), as well as the Total Transfer Potential (TTP) were used as approaches to evaluate model performance and to assess the quality of the cross contamination model predictions. Using the ASZ approach and considering that 70% of the observed counts have to be inside a defined acceptable zone of ±0.5 log10CFU per portion, it was found that the cross contamination parameters suggested by Møller et al. (2012) were not able to describe all 19 trials. However, for each of the collected grinding trials, the transfer event was well described when fitted to the model structure proposed by Møller et al. (2012). Parameter estimates obtained by fitting observed trials performed at different conditions, such as size and number of pieces of meat to be ground, may not apply to cross contamination under dissimilar processing conditions. Nevertheless, the risk estimates, as well as the TTP, revealed that the risk of disease may be reduced when the grinding of meat is performed in a grinder made of stainless steel (for all surfaces in contact with the meat), using a well-sharpened knife and holding at room temperatures lower than 4°C. Copyright © 2016 Elsevier B.V. All rights reserved.
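
    The ASZ criterion quoted can be checked in a few lines: at least 70% of observed counts must fall within ±0.5 log10 CFU of the simulated counts. A sketch with made-up observations:

    ```python
    import numpy as np

    def acceptable_simulation_zone(observed_log10, predicted_log10,
                                   half_width=0.5, required_fraction=0.70):
        """Return True if enough observations fall inside the +/- half_width band."""
        inside = np.abs(np.asarray(observed_log10) - np.asarray(predicted_log10)) <= half_width
        return inside.mean() >= required_fraction

    obs = [3.1, 2.7, 2.2, 1.8, 1.9, 1.1]     # log10 CFU per portion (hypothetical)
    pred = [3.0, 2.6, 2.4, 2.1, 1.5, 1.2]
    print(acceptable_simulation_zone(obs, pred))
    ```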

  14. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models or between models and real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found the social force model agrees best with this real data.

  15. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    PubMed

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
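
    The two calculation models differ only in which residues are counted as dye-binding sites; a sketch of that bookkeeping for an arbitrary sequence (the peptide is a placeholder):

    ```python
    def dye_binding_sites(sequence: str, include_his: bool) -> int:
        """Count residues assumed to bind Coomassie G-250: Arg+Lys (M1) or Arg+Lys+His (M2)."""
        residues = "RKH" if include_his else "RK"
        return sum(sequence.count(aa) for aa in residues)

    seq = "MKRHLLKAHRK"                               # placeholder peptide
    m1 = dye_binding_sites(seq, include_his=False)    # Method 1: Arg + Lys
    m2 = dye_binding_sites(seq, include_his=True)     # Method 2: Arg + Lys + His
    print(m1, m2)   # normalization divides the measured response by these counts
    ```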

  16. A modeling approach to describe ZVI-based anaerobic system.

    PubMed

    Xiao, Xiao; Sheng, Guo-Ping; Mu, Yang; Yu, Han-Qing

    2013-10-15

    Zero-valent iron (ZVI) is increasingly being added into anaerobic reactors to enhance the biological conversion of various less biodegradable pollutants (LBPs). Our study aimed to establish a new structure model based on the Anaerobic Digestion Model No. 1 (ADM1) to simulate such a ZVI-based anaerobic reactor. Three new processes, i.e., electron release from ZVI corrosion, H2 formation from ZVI corrosion, and transformation of LBPs, were integrated into ADM1. The established model was calibrated and tested using the experimental data from one published study, and validated using the data from another work. A good relationship between the predicted and measured results indicates that the proposed model was appropriate to describe the performance of the ZVI-based anaerobic system. Our model could provide more precise strategies for the design, development, and application of anaerobic systems especially for treating various LBPs-containing wastewaters. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Chromatography modelling to describe protein adsorption at bead level.

    PubMed

    Gerontas, Spyridon; Shapiro, Michael S; Bracewell, Daniel G

    2013-04-05

    Chromatographic modelling can be used to describe and further understand the behaviour of biological species during their chromatography separation on adsorption resins. Current modelling approaches assume uniform rate parameters throughout the column. Software and hardware advances now allow us to consider what can be learnt from modelling at bead level, enabling simulation of heterogeneity in bead and packed bed structure due to design or due to changes during operation. In this paper, a model has been developed to simulate at bead level protein loading in 1.5 μl microfluidic columns. This model takes into account the heterogeneity in bead sizes and the spatial variations of the characteristics of a packed bed, such as bed void fraction and dispersion, thus offering a detailed description of the flow field and mass transfer phenomena. Simulations were shown to be in good agreement with published experimental data. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. A quantitative dynamic systems model of health-related quality of life among older adults

    PubMed Central

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722

  19. A Model for Describing, Analysing and Investigating Cultural Understanding in EFL Reading Settings

    ERIC Educational Resources Information Center

    Porto, Melina

    2013-01-01

    This article describes a model used to explore cultural understanding in English as a foreign language reading in a developing country, namely Argentina. The model is designed to investigate, analyse and describe EFL readers' processes of cultural understanding in a specific context. Cultural understanding in reading is typically investigated…

  20. A model to describe potential effects of chemotherapy on critical radiobiological treatments

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, D.; Desco, M. M.; Antoranz, J. C.

    2016-08-01

    Although chemo- and radiotherapy can annihilate tumors on their own, they are also used in coadjuvancy: improving the local effects of radiotherapy by using chemotherapy as a radiosensitizer. The effects of radiotherapy are well described by current radiobiological models. The goal of this work is to extend a discrete radiotherapy model, previously used to describe high-radiation-dose response as well as the unusual radio-responses of some types of tumors (e.g. prostate cancer), into a model of chemo+radiotherapy that can describe how the outcome of their combination is a more efficient removal of the tumor. Our hypothesis is that, although both treatments have different mechanisms, they affect similar key points of cell metabolism and regulation that lead to cellular death. Hence, we consider a discrete model where chemotherapy may affect a fraction of the same targets destroyed by radiotherapy. Although radiotherapy reaches all cells equally, chemotherapy diffuses through a tumor, attaining lower concentrations at its center and higher at its surface. With our simulations we study the enhanced effect of the combined treatment and how it depends on the critical tissue parameters (the parameters of the non-extensive radiobiological model), the number of “targets” aimed at by chemotherapy, and the concentration and diffusion rate of the drug inside the tumor. The results show that an equivalent chemo-radio dose can be computed, allowing prediction of the lower radiation dose that causes the same effect as a radio-only treatment.

  1. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model-fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object which embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superseded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semi-quantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area was slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. I have enclosed a set of abstracts of selected papers on the work described below. Several papers that draw on the research supported during this period appeared in print after the grant period ended.

  2. [Quantitative risk model for verocytotoxigenic Escherichia coli cross-contamination during homemade hamburger preparation].

    PubMed

    Signorini, M L; Frizzo, L S

    2009-01-01

    The objective of this study was to develop a quantitative risk model for verocytotoxigenic Escherichia coli (VTEC) cross-contamination during hamburger preparation at home. Published scientific information about the disease was considered for the elaboration of the model, which included a number of routines performed during food preparation in kitchens. The associated probabilities of bacterial transference between food items and kitchen utensils which best described each stage of the process were incorporated into the model by using @Risk software. Handling raw meat before preparing ready-to-eat foods (odds ratio [OR] = 6.57), as well as hand-washing (OR = 12.02) and cutting-board-washing (OR = 5.02) habits, were the major risk factors for VTEC cross-contamination from meat to vegetables. The information provided by this model should be considered when designing public information campaigns on hemolytic uremic syndrome risk directed to food handlers, in order to stress the importance of the above-mentioned factors in disease transmission.

  3. Extended nonlinear feedback model for describing episodes of high inflation

    NASA Astrophysics Data System (ADS)

    Szybisz, Martín A.; Szybisz, Leszek

    2017-01-01

    An extension of the nonlinear feedback (NLF) formalism to describe regimes of hyper- and high inflation in an economy is proposed in the present work. In the NLF model the consumer price index (CPI) exhibits a finite-time singularity of the type 1/(t_c − t)^{(1−β)/β}, with β > 0, predicting a blow-up of the economy at a critical time t_c. However, this model fails to determine t_c in the case of weak hyperinflation regimes like, e.g., the one that occurred in Israel. To overcome this limitation, the NLF model is extended by introducing a parameter γ, which multiplies all terms with past growth rate index (GRI). In this novel approach the solution for the CPI is also analytic, being proportional to the Gaussian hypergeometric function 2F1(1/β, 1/β; 1 + 1/β; z), where z is a function of β, γ, and t_c. For z → 1 this hypergeometric function diverges, leading to a finite-time singularity from which a value of t_c can be determined. This singularity is also present in the GRI. It is shown that the interplay between the parameters β and γ may produce phenomena of multiple equilibria. An analysis of the severe hyperinflation that occurred in Hungary shows that the novel model is robust. When this model is used to examine data from Israel, a reasonable t_c is obtained. High-inflation regimes in Mexico and Iceland, which exhibit weaker inflation than Israel's, are also successfully described.
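
    As a rough numerical illustration of the solution described above, the divergence of the hypergeometric factor as z → 1 can be probed with SciPy. The explicit mapping from t to z is defined in the paper but not given in this abstract, so the sketch below simply scans z directly; all values are illustrative.

```python
# Sketch: divergence of a CPI solution proportional to the Gauss
# hypergeometric function 2F1(1/beta, 1/beta; 1 + 1/beta; z) as z -> 1.
# The paper defines z in terms of beta, gamma, and tc; since the abstract
# does not give that mapping, we scan z values directly.
from scipy.special import hyp2f1

def cpi_shape(z, beta):
    """CPI ~ 2F1(1/beta, 1/beta; 1 + 1/beta; z); diverges as z -> 1."""
    a = 1.0 / beta
    return hyp2f1(a, a, 1.0 + a, z)

beta = 0.5
for z in [0.0, 0.5, 0.9, 0.99, 0.999]:
    print(f"z = {z:6.3f}   CPI ~ {cpi_shape(z, beta):12.4f}")
```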

  4. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    PubMed

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. To provide an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
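
    For readers unfamiliar with the two SDT parameters, the sketch below computes the classical per-participant estimators of sensitivity (d') and response bias (criterion c) from hit and false-alarm counts. The article's multilevel approach instead pools these estimates across participants; the counts here are invented.

```python
# Sketch: classical signal detection theory estimates of sensitivity (d')
# and response bias (criterion c) from hit and false-alarm rates.
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # sensitivity
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
    return d_prime, criterion

print(sdt_parameters(hits=40, misses=10, false_alarms=5, correct_rejections=45))
```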

  5. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  6. A physical model describing the interaction of nuclear transport receptors with FG nucleoporin domain assemblies.

    PubMed

    Zahn, Raphael; Osmanović, Dino; Ehret, Severin; Araya Callis, Carolina; Frey, Steffen; Stewart, Murray; You, Changjiang; Görlich, Dirk; Hoogenboom, Bart W; Richter, Ralf P

    2016-04-08

    The permeability barrier of nuclear pore complexes (NPCs) controls bulk nucleocytoplasmic exchange. It consists of nucleoporin domains rich in phenylalanine-glycine motifs (FG domains). As a bottom-up nanoscale model for the permeability barrier, we have used planar films produced with three different end-grafted FG domains, and quantitatively analyzed the binding of two different nuclear transport receptors (NTRs), NTF2 and Importin β, together with the concomitant film thickness changes. NTR binding caused only moderate changes in film thickness; the binding isotherms showed negative cooperativity and could all be mapped onto a single master curve. This universal NTR binding behavior - a key element for the transport selectivity of the NPC - was quantitatively reproduced by a physical model that treats FG domains as regular, flexible polymers, and NTRs as spherical colloids with a homogeneous surface, ignoring the detailed arrangement of interaction sites along FG domains and on the NTR surface.
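
    The negative cooperativity reported for the binding isotherms can be illustrated with a simple Hill fit, where a Hill coefficient n < 1 is one conventional signature of negatively cooperative binding. This is not the authors' physical polymer-colloid model; the data and fit below are only a sketch.

```python
# Sketch: fitting a Hill isotherm to NTR binding data; n < 1 indicates
# negative cooperativity. All data values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bmax, kd, n):
    return bmax * c**n / (kd**n + c**n)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])      # nominal NTR conc.
bound = np.array([0.08, 0.18, 0.35, 0.52, 0.68, 0.80, 0.88])  # fake binding signal

(bmax, kd, n), _ = curve_fit(hill, conc, bound, p0=(1.0, 3.0, 1.0))
print(f"Bmax = {bmax:.2f}, Kd = {kd:.2f}, Hill n = {n:.2f} (n < 1: neg. coop.)")
```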

  7. Four-point bending as a method for quantitatively evaluating spinal arthrodesis in a rat model.

    PubMed

    Robinson, Samuel T; Svet, Mark T; Kanim, Linda A; Metzger, Melodie F

    2015-02-01

    The most common method of evaluating the success (or failure) of rat spinal fusion procedures is manual palpation testing. Whereas manual palpation provides only a subjective binary answer (fused or not fused) regarding the success of a fusion surgery, mechanical testing can provide more quantitative data by assessing variations in strength among treatment groups. We here describe a mechanical testing method to quantitatively assess single-level spinal fusion in a rat model, to improve on the binary and subjective nature of manual palpation as an end point for fusion-related studies. We tested explanted lumbar segments from Sprague-Dawley rat spines after single-level posterolateral fusion procedures at L4-L5. Segments were classified as 'not fused,' 'restricted motion,' or 'fused' by using manual palpation testing. After thorough dissection and potting of the spine, 4-point bending in flexion then was applied to the L4-L5 motion segment, and stiffness was measured as the slope of the moment-displacement curve. Results demonstrated statistically significant differences in stiffness among all groups, which were consistent with preliminary grading according to manual palpation. In addition, the 4-point bending results provided quantitative information regarding the quality of the bony union formed and therefore enabled the comparison of fused specimens. Our results demonstrate that 4-point bending is a simple, reliable, and effective way to describe and compare results among rat spines after fusion surgery.
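
    The stiffness measurement described above reduces to a slope estimate over the linear region of the moment-displacement curve; a minimal sketch, assuming placeholder test data:

```python
# Sketch: stiffness extracted as the slope of the (assumed linear) region
# of a moment-displacement curve from a four-point bending test.
# The arrays are placeholders for recorded test data.
import numpy as np

displacement = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])  # mm
moment = np.array([0.0, 1.1, 2.1, 3.2, 4.1, 5.2])              # N*mm

slope, intercept = np.polyfit(displacement, moment, 1)
print(f"bending stiffness ~ {slope:.1f} N*mm/mm")
```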

  8. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015

    PubMed Central

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions)—which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that, when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be ‘invaded’ by a newcomer third party very quickly, while the second remains immune to such invasion. PMID:27171226

  9. Can the three pore model correctly describe peritoneal transport of protein?

    PubMed

    Waniewski, Jacek; Poleszczuk, Jan; Antosiewicz, Stefan; Baczyński, Daniel; Gałach, Magda; Pietribiasi, Mauro; Wańkowicz, Zofia

    2014-01-01

    The three pore model (3PM) includes large pores for the description of protein leak into the peritoneal cavity during peritoneal dialysis. However, the reliability of this description has not yet been fully tested against clinical data. Peritoneal transport parameters were estimated using the 3PM, an extended 3-pore model (with estimation of the fraction of large pores, ext3PM), the ext3PM with modified sizes of pores and proteins (mext3PM), and a simplified two-pore model (2PM, small and ultrasmall pores) for 32 patients on peritoneal dialysis investigated using the sequential peritoneal equilibration test (consecutive peritoneal equilibration test [PET]: glucose 2.27%, 4 h, and miniPET: glucose 3.86%, 1 h). Urea, creatinine, glucose, sodium, phosphate, albumin, and IgM concentrations were measured in dialysis fluid and plasma. The ext3PM and mext3PM, with a large-pore fraction of about 0.14, provided a good description of fluid and small-solute kinetics, but their predictions for albumin transport were less accurate. The 2PM precisely described the data on fluid and small-solute transport. The 3-pore models could not describe the diffusive-convective transport of albumin as precisely as the transport of fluid, small solutes, and IgM. The 2PM (not applicable to proteins) was nevertheless an efficient tool for modeling fluid and small-solute transport.

  10. Quantitative measures for redox signaling.

    PubMed

    Pillay, Ché S; Eagling, Beatrice D; Driscoll, Scott R E; Rohwer, Johann M

    2016-07-01

    Redox signaling is now recognized as an important regulatory mechanism for a number of cellular processes including the antioxidant response, phosphokinase signal transduction and redox metabolism. While there has been considerable progress in identifying the cellular machinery involved in redox signaling, quantitative measures of redox signals have been lacking, limiting efforts aimed at understanding and comparing redox signaling under normoxic and pathogenic conditions. Here we have outlined some of the accepted principles for redox signaling, including the description of hydrogen peroxide as a signaling molecule and the role of kinetics in conferring specificity to these signaling events. Based on these principles, we then develop a working definition for redox signaling and review a number of quantitative methods that have been employed to describe signaling in other systems. Using computational modeling and published data, we show how time- and concentration-dependent analyses, in particular, could be used to quantitatively describe redox signaling and therefore provide important insights into the functional organization of redox networks. Finally, we consider some of the key challenges with implementing these methods. Copyright © 2016 Elsevier Inc. All rights reserved.
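
    The kinetic argument for specificity sketched above can be made concrete with a back-of-the-envelope competition calculation: the fraction of an H2O2 flux consumed by each target scales with k·[target]. The rate constants and concentrations below are rough literature-order values chosen for illustration only.

```python
# Sketch: kinetic competition for H2O2 among cellular targets. The share of
# oxidant consumed by each target is k*[target] / sum(k*[target]). Rate
# constants (M^-1 s^-1) and concentrations (M) are rough, illustrative values.
targets = {
    "peroxiredoxin":            (1e7, 20e-6),
    "glutathione peroxidase":   (1e7, 2e-6),
    "GAPDH (signaling target)": (1e2, 10e-6),
}
total = sum(k * c for k, c in targets.values())
for name, (k, c) in targets.items():
    print(f"{name:28s} consumes {100 * k * c / total:9.5f}% of the H2O2 flux")
```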

  11. Quantitative trait nucleotide analysis using Bayesian model selection.

    PubMed

    Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D

    2005-10-01

    Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
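
    As a toy illustration of the model-selection idea behind BQTN (not the SOLAR implementation itself), the sketch below scores single-variant linear models by BIC and converts the scores into approximate posterior probabilities of effect on simulated data.

```python
# Sketch: BIC-weight approximation to posterior model probabilities for
# single-variant linear models, loosely illustrating the Bayesian model
# selection/averaging idea behind BQTN analysis. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n, n_variants = 500, 5
genotypes = rng.integers(0, 3, size=(n, n_variants)).astype(float)
phenotype = 0.5 * genotypes[:, 2] + rng.normal(size=n)   # variant 2 is causal

def bic_linear(x, y):
    X = np.column_stack([np.ones_like(x), x])
    _, rss = np.linalg.lstsq(X, y, rcond=None)[:2]
    return len(y) * np.log(rss[0] / len(y)) + X.shape[1] * np.log(len(y))

bics = np.array([bic_linear(genotypes[:, j], phenotype) for j in range(n_variants)])
weights = np.exp(-0.5 * (bics - bics.min()))
posterior = weights / weights.sum()     # approximate posterior prob. of effect
print(np.round(posterior, 3))           # variant 2 should dominate
```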

  12. Quantitative Prediction of Drug–Drug Interactions Involving Inhibitory Metabolites in Drug Development: How Can Physiologically Based Pharmacokinetic Modeling Help?

    PubMed Central

    Chen, Y; Mao, J; Lin, J; Yu, H; Peters, S; Shebley, M

    2016-01-01

    This subteam under the Drug Metabolism Leadership Group (Innovation and Quality Consortium) investigated the quantitative role of circulating inhibitory metabolites in drug–drug interactions using physiologically based pharmacokinetic (PBPK) modeling. Three drugs with major circulating inhibitory metabolites (amiodarone, gemfibrozil, and sertraline) were systematically evaluated in addition to the literature review of recent examples. The application of PBPK modeling in drug interactions by inhibitory parent–metabolite pairs is described and guidance on strategic application is provided. PMID:27642087

  13. Building Quantitative Hydrologic Storylines from Process-based Models for Managing Water Resources in the U.S. Under Climate-changed Futures

    NASA Astrophysics Data System (ADS)

    Arnold, J.; Gutmann, E. D.; Clark, M. P.; Nijssen, B.; Vano, J. A.; Addor, N.; Wood, A.; Newman, A. J.; Mizukami, N.; Brekke, L. D.; Rasmussen, R.; Mendoza, P. A.

    2016-12-01

    Climate change narratives for water-resource applications must represent the change signals contextualized by hydroclimatic process variability and uncertainty at multiple scales. Building narratives of plausible change includes assessing uncertainties across GCM structure, internal climate variability, climate downscaling methods, and hydrologic models. Work with this linked modeling chain has dealt mostly with GCM sampling directed separately to either model fidelity (does the model correctly reproduce the physical processes in the world?) or sensitivity (of different model responses to CO2 forcings) or diversity (of model type, structure, and complexity). This leaves unaddressed any interactions among those measures and with other components in the modeling chain used to identify water-resource vulnerabilities to specific climate threats. However, time-sensitive, real-world vulnerability studies typically cannot accommodate a full uncertainty ensemble across the whole modeling chain, so a gap has opened between current scientific knowledge and most routine applications for climate-changed hydrology. To close that gap, the US Army Corps of Engineers, the Bureau of Reclamation, and the National Center for Atmospheric Research are working on techniques to subsample uncertainties objectively across modeling chain components and to integrate results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components. Rather, they stem from the more comprehensive characterization of the full uncertainty space for each component. Equally important from the perspective of water-resource practitioners, these quantitative hydrologic storylines are anchored in actual design and operations decisions potentially affected by climate change. This talk will describe part of our work characterizing variability and uncertainty across modeling chain components and their

  14. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part III: Estimation from Panel Data. Part II, Chapter 5.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Addressing the problems of studying change and the change process, the report argues that sociologists should study coupled changes in qualitative and quantitative outcomes (e.g., marital status and earnings). The author presents a model for sociological studies of change in…

  15. The remarkable ability of turbulence model equations to describe transition

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1992-01-01

    This paper demonstrates how well the k-omega turbulence model describes the nonlinear growth of flow instabilities from laminar flow into the turbulent flow regime. Viscous modifications are proposed for the k-omega model that yield close agreement with measurements and with Direct Numerical Simulation results for channel and pipe flow. These modifications permit prediction of subtle sublayer details such as maximum dissipation at the surface, k ∝ y² as y → 0, and the sharp peak value of k near the surface. With two transition-specific closure coefficients, the model equations accurately predict transition for an incompressible flat-plate boundary layer. The analysis also shows why the k-epsilon model is so difficult to use for predicting transition.
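
    For reference, the standard high-Reynolds-number form of the Wilcox k-omega transport equations is reproduced below; the paper's contribution is a set of viscous (low-Reynolds-number) modifications to the closure coefficients, which are not shown here.

```latex
% Standard (Wilcox) k-omega model, high-Reynolds-number form; nu_T = k/omega.
% The paper's viscous modifications to the closure coefficients are omitted.
\[
\begin{aligned}
\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j}
  &= \tau_{ij}\frac{\partial U_i}{\partial x_j} - \beta^{*} k \omega
     + \frac{\partial}{\partial x_j}\!\left[(\nu + \sigma^{*}\nu_T)\frac{\partial k}{\partial x_j}\right], \\
\frac{\partial \omega}{\partial t} + U_j \frac{\partial \omega}{\partial x_j}
  &= \alpha \frac{\omega}{k}\,\tau_{ij}\frac{\partial U_i}{\partial x_j} - \beta \omega^{2}
     + \frac{\partial}{\partial x_j}\!\left[(\nu + \sigma\nu_T)\frac{\partial \omega}{\partial x_j}\right],
\qquad \nu_T = \frac{k}{\omega}.
\end{aligned}
\]
```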

  16. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    PubMed

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  17. A physical model describing the interaction of nuclear transport receptors with FG nucleoporin domain assemblies

    PubMed Central

    Zahn, Raphael; Osmanović, Dino; Ehret, Severin; Araya Callis, Carolina; Frey, Steffen; Stewart, Murray; You, Changjiang; Görlich, Dirk; Hoogenboom, Bart W; Richter, Ralf P

    2016-01-01

    The permeability barrier of nuclear pore complexes (NPCs) controls bulk nucleocytoplasmic exchange. It consists of nucleoporin domains rich in phenylalanine-glycine motifs (FG domains). As a bottom-up nanoscale model for the permeability barrier, we have used planar films produced with three different end-grafted FG domains, and quantitatively analyzed the binding of two different nuclear transport receptors (NTRs), NTF2 and Importin β, together with the concomitant film thickness changes. NTR binding caused only moderate changes in film thickness; the binding isotherms showed negative cooperativity and could all be mapped onto a single master curve. This universal NTR binding behavior – a key element for the transport selectivity of the NPC – was quantitatively reproduced by a physical model that treats FG domains as regular, flexible polymers, and NTRs as spherical colloids with a homogeneous surface, ignoring the detailed arrangement of interaction sites along FG domains and on the NTR surface. DOI: http://dx.doi.org/10.7554/eLife.14119.001 PMID:27058170

  18. Photometric model of diffuse surfaces described as a distribution of interfaced Lambertian facets.

    PubMed

    Simonot, Lionel

    2009-10-20

    The Lambertian model for diffuse reflection is widely used for the sake of its simplicity. Nevertheless, this model is known to be inaccurate in describing a lot of real-world objects, including those that present a matte surface. To overcome this difficulty, we propose a photometric model where the surfaces are described as a distribution of facets where each facet consists of a flat interface on a Lambertian background. Compared to the Lambertian model, it includes two additional physical parameters: an interface roughness parameter and the ratio between the refractive indices of the background binder and of the upper medium. The Torrance-Sparrow model--distribution of strictly specular facets--and the Oren-Nayar model--distribution of strictly Lambertian facets--appear as special cases.

  19. Application of a Mathematical Model to Describe the Effects of Chlorpyrifos on Caenorhabditis elegans Development

    PubMed Central

    Boyd, Windy A.; Smith, Marjolein V.; Kissling, Grace E.; Rice, Julie R.; Snyder, Daniel W.; Portier, Christopher J.; Freedman, Jonathan H.

    2009-01-01

    Background The nematode Caenorhabditis elegans is being assessed as an alternative model organism as part of an interagency effort to develop better means to test potentially toxic substances. As part of this effort, assays that use the COPAS Biosort flow sorting technology to record optical measurements (time of flight (TOF) and extinction (EXT)) of individual nematodes under various chemical exposure conditions are being developed. A mathematical model has been created that uses Biosort data to quantitatively and qualitatively describe C. elegans growth, and link changes in growth rates to biological events. Chlorpyrifos, an organophosphate pesticide known to cause developmental delays and malformations in mammals, was used as a model toxicant to test the applicability of the growth model for in vivo toxicological testing. Methodology/Principal Findings L1 larval nematodes were exposed to a range of sub-lethal chlorpyrifos concentrations (0–75 µM) and measured every 12 h. In the absence of toxicant, C. elegans matured from L1s to gravid adults by 60 h. A mathematical model was used to estimate nematode size distributions at various times. Mathematical modeling of the distributions allowed the number of measured nematodes and log(EXT) and log(TOF) growth rates to be estimated. The model revealed three distinct growth phases. The points at which estimated growth rates changed (change points) were constant across the ten chlorpyrifos concentrations. Concentration response curves with respect to several model-estimated quantities (numbers of measured nematodes, mean log(TOF) and log(EXT), growth rates, and time to reach change points) showed a significant decrease in C. elegans growth with increasing chlorpyrifos concentration. Conclusions Effects of chlorpyrifos on C. elegans growth and development were mathematically modeled. Statistical tests confirmed a significant concentration effect on several model endpoints. This confirmed that chlorpyrifos affects C

  20. Computer modeling describes gravity-related adaptation in cell cultures.

    PubMed

    Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny

    2009-12-16

    Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.

  1. A hybrid model describing ion induced kinetic electron emission

    NASA Astrophysics Data System (ADS)

    Hanke, S.; Duvenbeck, A.; Heuser, C.; Weidtmann, B.; Wucher, A.

    2015-06-01

    We present a model to describe the kinetic internal and external electron emission from an ion-bombarded metal target. The model is based upon a molecular dynamics treatment of the nuclear degrees of freedom, while the electronic system is treated as a quasi-free electron gas characterized by its Fermi energy, electron temperature, and a characteristic attenuation length. In a series of previous works we employed this model, which includes the local kinetic excitation as well as the rapid spread of the generated excitation energy, to calculate internal and external electron emission yields within the framework of a Richardson-Dushman-like thermionic emission model. However, this kind of treatment turned out to fail in realistically predicting experimentally measured internal electron yields, mainly because the treatment of electronic transport was restricted to a diffusive description. Here, we propose a slightly modified approach that additionally incorporates the contribution of hot electrons which are generated in the bulk material and undergo ballistic transport towards the emitting interface.
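
    The thermionic baseline referred to above is the Richardson-Dushman relation for the emitted current density; in its standard form (with W the work function and T_e the electron temperature):

```latex
% Richardson-Dushman thermionic emission current density (standard form);
% W is the work function, T_e the electron temperature.
\[
J \;=\; A\,T_e^{2}\,\exp\!\left(-\frac{W}{k_B T_e}\right),
\qquad
A \;=\; \frac{4\pi m_e k_B^{2} e}{h^{3}} \;\approx\; 1.2\times 10^{6}\ \mathrm{A\,m^{-2}\,K^{-2}}.
\]
```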

  2. Performance Theories for Sentence Coding: Some Quantitative Models

    ERIC Educational Resources Information Center

    Aaronson, Doris; And Others

    1977-01-01

    This study deals with the patterns of word-by-word reading times over a sentence when the subject must code the linguistic information sufficiently for immediate verbatim recall. A class of quantitative models is considered that would account for reading times at phrase breaks. (Author/RM)

  3. Quantitative research.

    PubMed

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  4. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34
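
    One plausible way to build a voxel-wise composite biomarker score from several quantitative MR parameters is a logistic regression over the coregistered labels; the abstract does not name the model family, so the sketch below, on simulated voxels, is purely illustrative.

```python
# Sketch: combining several quantitative MR parameters into a voxel-wise
# composite biomarker score (CBS) with logistic regression, scored by AUC.
# Logistic regression is a stand-in; the paper's model family is not named
# in the abstract. All voxel data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))                    # columns: T2, ADC, Ktrans, kep
logit = -1.0 - 1.2 * X[:, 1] + 0.6 * X[:, 2]   # ADC lower in cancer voxels
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated cancer labels

model = LogisticRegression().fit(X, y)
cbs = model.predict_proba(X)[:, 1]             # composite biomarker score
print(f"CBS AUC = {roc_auc_score(y, cbs):.2f} "
      f"vs ADC alone = {roc_auc_score(y, -X[:, 1]):.2f}")
```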

  5. Describing Myxococcus xanthus Aggregation Using Ostwald Ripening Equations for Thin Liquid Films

    PubMed Central

    Bahar, Fatmagül; Pratt-Szeliga, Philip C.; Angus, Stuart; Guo, Jiaye; Welch, Roy D.

    2014-01-01

    When starved, a swarm of millions of Myxococcus xanthus cells coordinate their movement from outward swarming to inward coalescence. The cells then execute a synchronous program of multicellular development, arranging themselves into dome-shaped aggregates. Over the course of development, about half of the initial aggregates disappear, while others persist and mature into fruiting bodies. This work seeks to develop a quantitative model for aggregation that accurately simulates which aggregates will disappear and which will persist. We analyzed time-lapse movies of M. xanthus development, modeled aggregation using the equations that describe Ostwald ripening of droplets in thin liquid films, and predicted the disappearance and persistence of aggregates with an average accuracy of 85%. We then experimentally validated a prediction that is fundamental to this model by tracking individual fluorescent cells as they moved between aggregates and demonstrating that cell movement towards and away from aggregates correlates with aggregate disappearance. Describing development through this model may limit the number and type of molecular genetic signals needed to complete M. xanthus development, and it provides numerous additional testable predictions. PMID:25231319

  6. Diffusion model to describe osteogenesis within a porous titanium scaffold.

    PubMed

    Schmitt, M; Allena, R; Schouman, T; Frasca, S; Collombet, J M; Holy, X; Rouch, P

    2016-01-01

    In this study, we develop a two-dimensional finite element model, derived from an animal experiment, that allows simulating osteogenesis within a porous titanium scaffold implanted in a ewe's hemi-mandible for 12 weeks. The cell activity is described through diffusion equations and regulated by the stress state of the structure. We compare our model to (i) histological observations and (ii) experimental data obtained from a mechanical test performed on the sacrificed animal. We show that our mechano-biological approach provides consistent numerical results and constitutes a useful tool to predict osteogenesis patterns.
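
    A minimal sketch of the diffusion idea (not the authors' two-dimensional, stress-regulated finite element model): an explicit finite-difference update of a one-dimensional cell density seeded at a scaffold boundary.

```python
# Sketch: 1D explicit finite-difference diffusion of a cell density, the
# kind of equation used above to describe cell activity. The real model is
# a 2D finite element formulation coupled to the stress state.
import numpy as np

nx, dx, dt, D = 50, 0.1, 0.004, 0.5      # D*dt/dx^2 = 0.2 <= 0.5 (stable)
c = np.zeros(nx)
c[0] = 1.0                               # cells seeded at one boundary

for _ in range(500):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    lap[0] = lap[-1] = 0.0               # crude fix: ignore wrap-around ends
    c += dt * D * lap
    c[0] = 1.0                           # constant-source boundary condition

print(f"invasion front ~ {np.argmax(c < 0.05) * dx:.2f} (arbitrary units)")
```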

  7. Quantitative assessment model for gastric cancer screening

    PubMed Central

    Chen, Kun; Yu, Wei-Ping; Song, Liang; Zhu, Yi-Min

    2005-01-01

    AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out on 66 patients and 198 normal people, and the risk and protective factors of gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills and crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators, hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected the statistically significant factors and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model was established for each subset, and a mathematical expression for the attribute degree (AD) was obtained. RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, statistical testing showed that the identification outcomes of the two different calculation methods were identical (P>0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible and economic, and can be used to determine individual and population risks of gastric cancer. PMID:15655813
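
    The screening logic can be sketched as a weighted-sum attribute degree compared against a threshold; the factor weights, codings, and example subject below are hypothetical, with only the 0.20 cutoff taken from the abstract.

```python
# Sketch: a weighted-sum "attribute degree" screening score with a decision
# threshold, echoing the fuzzy-subset model above. Weights, factor codings,
# and the example subject are hypothetical; only the 0.20 cutoff is from
# the abstract.
weights = {"heavy_manual_work": 0.25, "salted_seafood": 0.20,
           "mother_gastric_disease": 0.20, "no_refrigerator": 0.20,
           "hot_food": 0.15}
threshold = 0.20                         # cancer-subset cutoff (abstract)

subject = {"heavy_manual_work": 1, "salted_seafood": 0,
           "mother_gastric_disease": 1, "no_refrigerator": 0, "hot_food": 0}

ad = sum(weights[k] * subject[k] for k in weights)
print(f"AD = {ad:.2f} -> {'refer for endoscopy' if ad > threshold else 'low risk'}")
```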

  8. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    NASA Astrophysics Data System (ADS)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial Revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are, however, very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical

  9. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation

  10. Quantitative self-assembly prediction yields targeted nanomedicines

    NASA Astrophysics Data System (ADS)

    Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.

    2018-02-01

    Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.

  11. Conceptual hierarchical modeling to describe wetland plant community organization

    USGS Publications Warehouse

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble the hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh/sedge meadow status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper-level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.
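
    Steps 1 and 2 of the procedure (cluster analysis, then non-metric multidimensional scaling) can be sketched with standard SciPy/scikit-learn tools on simulated species-abundance data; the ecological interpretation in steps 3 and 4 is not automated here.

```python
# Sketch: steps 1-2 of the hierarchical modeling process above --
# (1) cluster wetlands on species composition, (2) ordinate them with
# non-metric multidimensional scaling (NMDS). Data are simulated.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
species = rng.poisson(2.0, size=(30, 12)).astype(float)  # 30 wetlands x 12 spp.

# Step 1: hierarchical (Ward) clustering into four wetland groups.
groups = fcluster(linkage(species, method="ward"), t=4, criterion="maxclust")

# Step 2: non-metric MDS ordination in two dimensions.
nmds = MDS(n_components=2, metric=False, dissimilarity="euclidean",
           random_state=0)
axes = nmds.fit_transform(species)
print(groups[:10], axes[:3].round(2), sep="\n")
```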

  12. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and complicate quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing was carried out synchronously to verify the MMM results. It was found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs was investigated, showing that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter on which to establish a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and that the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
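
    The stress-strength interference calculation underlying the reliability model can be sketched as follows under the Gaussian assumption the abstract reports for K_vs; all numbers are hypothetical.

```python
# Sketch: classical stress-strength interference reliability under Gaussian
# assumptions: R = P(strength > stress)
#               = Phi((mu_S - mu_s) / sqrt(sigma_S^2 + sigma_s^2)).
# All numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

mu_strength, sd_strength = 120.0, 12.0   # e.g. critical K_vs at failure
mu_stress, sd_stress = 90.0, 15.0        # e.g. measured K_vs in service

R = norm.cdf((mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2))
print(f"reliability degree R = {R:.3f}")
```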

  13. A quantitative systems physiology model of renal function and blood pressure regulation: Model description.

    PubMed

    Hallow, K M; Gebremichael, Y

    2017-06-01

    Renal function plays a central role in cardiovascular, kidney, and multiple other diseases, and many existing and novel therapies act through renal mechanisms. Even with decades of accumulated knowledge of renal physiology, pathophysiology, and pharmacology, the dynamics of renal function remain difficult to understand and predict, often resulting in unexpected or counterintuitive therapy responses. Quantitative systems pharmacology modeling of renal function integrates this accumulated knowledge into a quantitative framework, allowing evaluation of competing hypotheses, identification of knowledge gaps, and generation of new experimentally testable hypotheses. Here we present a model of renal physiology and control mechanisms involved in maintaining sodium and water homeostasis. This model represents the core renal physiological processes involved in many research questions in drug development. The model runs in R and the code is made available. In a companion article, we present a case study using the model to explore mechanisms and pharmacology of salt-sensitive hypertension. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  14. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
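
    The two-factor structure can be made concrete by simulating a between-image bias (fixed per cover) plus within-image noise (varying per embedding) and comparing the two variance components; values are illustrative only.

```python
# Sketch: simulating the two error factors discussed above -- a
# between-image bias fixed per cover image, and within-image noise varying
# between embeddings -- then comparing their variance components.
# All values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_images, n_embeddings = 400, 25
true_length = 0.10                                     # embedding rate

image_bias = rng.normal(0.0, 0.02, size=(n_images, 1))               # factor 1
within_noise = rng.normal(0.0, 0.01, size=(n_images, n_embeddings))  # factor 2
estimates = true_length + image_bias + within_noise

between_var = estimates.mean(axis=1).var()   # dominated by image bias
within_var = estimates.var(axis=1).mean()    # within-image noise
print(f"between-image variance ~ {between_var:.6f}, within ~ {within_var:.6f}")
```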

  15. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  16. A Transformative Model for Undergraduate Quantitative Biology Education

    PubMed Central

    Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  17. A Quantitative Model of Expert Transcription Typing

    DTIC Science & Technology

    1993-03-08

    side of pure psychology, several researchers have argued that transcription typing is a particularly good activity for the study of human skilled...phenomenon with a quantitative METT prediction. The first, quick and dirty analysis gives a good prediction of the copy span, in fact, it is even...typing, it should be demonstrated that the mechanism of the model does not get in the way of good predictions. If situations occur where the entire

  18. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  19. High-Content Screening for Quantitative Cell Biology.

    PubMed

    Mattiazzi Usaj, Mojca; Styles, Erin B; Verster, Adrian J; Friesen, Helena; Boone, Charles; Andrews, Brenda J

    2016-08-01

    High-content screening (HCS), which combines automated fluorescence microscopy with quantitative image analysis, allows the acquisition of unbiased multiparametric data at the single cell level. This approach has been used to address diverse biological questions and identify a plethora of quantitative phenotypes of varying complexity in numerous different model systems. Here, we describe some recent applications of HCS, ranging from the identification of genes required for specific biological processes to the characterization of genetic interactions. We review the steps involved in the design of useful biological assays and automated image analysis, and describe major challenges associated with each. Additionally, we highlight emerging technologies and future challenges, and discuss how the field of HCS might be enhanced in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Quantitative Systems Pharmacology: A Case for Disease Models.

    PubMed

    Musante, C J; Ramanujan, S; Schmidt, B J; Ghobrial, O G; Lu, J; Heatherington, A C

    2017-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model-informed drug discovery and development, supporting program decisions from exploratory research through late-stage clinical trials. In this commentary, we discuss the unique value of disease-scale "platform" QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. © 2016 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of The American Society for Clinical Pharmacology and Therapeutics.

  1. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
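    A schematic of the train-then-blend structure of the sub-model idea (a sketch under assumed interfaces: sklearn's PLSRegression; the composition ranges and the simple-average blending rule below are hypothetical placeholders for the paper's range-weighted blending):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical composition ranges (e.g. wt% of one oxide); the published
    # method defines its own ranges and blending weights.
    RANGES = {"low": (0.0, 30.0), "mid": (20.0, 60.0), "high": (50.0, 100.0)}

    def train(X, y, n_components=6):
        """Train one full-range PLS model plus one PLS model per limited range.
        Assumes enough calibration targets fall inside each range."""
        full = PLSRegression(n_components=n_components).fit(X, y)
        subs = {}
        for name, (lo, hi) in RANGES.items():
            mask = (y >= lo) & (y <= hi)
            subs[name] = PLSRegression(n_components=n_components).fit(X[mask], y[mask])
        return full, subs

    def predict(full, subs, X):
        """Predict with the full model first, then blend the sub-models whose
        ranges contain that initial estimate."""
        y0 = full.predict(X).ravel()
        out = np.empty_like(y0)
        for i, y_est in enumerate(y0):
            hits = [subs[name].predict(X[i:i + 1]).ravel()[0]
                    for name, (lo, hi) in RANGES.items() if lo <= y_est <= hi]
            out[i] = np.mean(hits) if hits else y_est
        return out
    ```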

  2. The stay/switch model describes choice among magnitudes of reinforcers.

    PubMed

    MacDonall, James S

    2008-06-01

    The stay/switch model is an alternative to the generalized matching law for describing choice in concurrent procedures. The purpose of the present experiment was to extend this model to choice among magnitudes of reinforcers. Rats were exposed to conditions in which the magnitude of reinforcers (number of food pellets) varied for staying at alternative 1, switching from alternative 1, staying at alternative 2 and switching from alternative 2. A changeover delay was not used. The results showed that the stay/switch model provided a good account of the data overall, and deviations from fits of the generalized matching law to response allocation data were in the direction predicted by the stay/switch model. In addition, comparisons among specific conditions suggested that varying the ratio of obtained reinforcers, as in the generalized matching law, was not necessary to change the response and time allocations. Other comparisons suggested that varying the ratio of obtained reinforcers was not sufficient to change response allocation. Taken together these results provide additional support for the stay/switch model of concurrent choice.
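    For context (a standard result from the operant-choice literature, not stated in the record above), the generalized matching law against which the stay/switch model is compared is usually written as:

    ```latex
    \log\!\left(\frac{B_1}{B_2}\right) \;=\; a\,\log\!\left(\frac{R_1}{R_2}\right) \;+\; \log b
    ```

    where B_i are response (or time) allocations, R_i are obtained reinforcer rates, a is sensitivity, and b is bias.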

  3. An Integrated Qualitative and Quantitative Biochemical Model Learning Framework Using Evolutionary Strategy and Simulated Annealing.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    2015-01-01

    Both qualitative and quantitative model learning frameworks for biochemical systems have been studied in computational systems biology. In this research, after introducing two forms of pre-defined component patterns to represent biochemical models, we propose an integrative qualitative and quantitative modelling framework for inferring biochemical systems. In the proposed framework, interactions between reactants in the candidate models for a target biochemical system are evolved and eventually identified by the application of a qualitative model learning approach with an evolution strategy. Kinetic rates of the models generated from qualitative model learning are then further optimised by employing a quantitative approach with simulated annealing. Experimental results indicate that the proposed integrative framework can learn the relationships between biochemical reactants qualitatively and can make the model replicate the behaviours of the target system by optimising the kinetic rates quantitatively. Moreover, potential reactants of a target biochemical system can be discovered by hypothesising complex reactants in the synthetic models. Based on the biochemical models learned from the proposed framework, biologists can further perform experimental studies in the wet laboratory. In this way, natural biochemical systems can be better understood.
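    A generic simulated-annealing loop for the kinetic-rate optimisation step might look as follows (a sketch under our own assumptions; the paper's cost function, move set and cooling schedule are not given in the abstract):

    ```python
    import math
    import random

    def anneal(rates, cost, steps=5000, t0=1.0, cooling=0.999):
        """Minimise cost(rates): perturb one kinetic rate at a time, accept
        worse moves with probability exp(-delta / T), cool T geometrically."""
        best = cur = list(rates)
        f_cur = f_best = cost(cur)
        T = t0
        for _ in range(steps):
            cand = list(cur)
            i = random.randrange(len(cand))
            cand[i] *= math.exp(random.gauss(0.0, 0.1))  # multiplicative step keeps rates > 0
            f_cand = cost(cand)
            if f_cand < f_cur or random.random() < math.exp(-(f_cand - f_cur) / T):
                cur, f_cur = cand, f_cand
                if f_cur < f_best:
                    best, f_best = list(cur), f_cur
            T *= cooling
        return best, f_best
    ```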

  4. Numerical model describing optimization of fibres winding process on open and closed frame

    NASA Astrophysics Data System (ADS)

    Petrů, M.; Mlýnek, J.; Martinec, T.

    2016-08-01

    This article discusses a numerical model describing the optimization of the fibre winding process on open and closed frames. The production quality of this type of composite frame depends primarily on the correct winding of fibres on a polyurethane core. In particular, it is necessary to ensure the correct winding angles of the fibres on the polyurethane core and the homogeneity of the individual winding layers. The article describes a mathematical model for using an industrial robot in filament winding and shows how to calculate the trajectory of the robot. The polyurethane core is fastened to the robot-end-effector so that, during the winding process, it passes through a fibre-processing head along a suitably determined robot-end-effector trajectory. We use the described numerical model and matrix calculus to compute the trajectory of the robot-end-effector that produces the desired passage of the frame through the fibre-processing head. The calculation of the trajectory was programmed in the Delphi development environment. The relations of the numerical model are essential for solving the real passage of a polyurethane core through the fibre-processing head.

  5. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast

    PubMed Central

    Pang, Wei; Coghill, George M.

    2015-01-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of the osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally the future development of the Morven framework for modelling the dynamic biological systems is discussed. PMID:25864377

  6. Statistical Model to Analyze Quantitative Proteomics Data Obtained by 18O/16O Labeling and Linear Ion Trap Mass Spectrometry

    PubMed Central

    Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús

    2009-01-01

    Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides large scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at scan and peptide levels was negligible in three large scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative proteomics studies at a depth of several thousand proteins
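    The nested structure described can be written generically as (our notation, not the authors'):

    ```latex
    x_{qpsf} \;=\; \mu \;+\; \alpha_q \;+\; \beta_{qp} \;+\; \gamma_{qps} \;+\; \varepsilon_{qpsf},
    \qquad
    \sigma^2_{\mathrm{total}} \;=\; \sigma^2_{\mathrm{protein}} + \sigma^2_{\mathrm{peptide}} + \sigma^2_{\mathrm{scan}} + \sigma^2_{\mathrm{fit}}
    ```

    where q, p, s and f index protein, peptide, scan and spectrum fit, and the four variance components correspond to the four sources of error in the random effects model.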

  7. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    NASA Astrophysics Data System (ADS)

    Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars

    2013-08-01

    There have been many claims that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. It is thus necessary to introduce a framework to measure to what extent power system operation will be changed by various parameters of DERs. This article proposed a modelling framework for an overview analysis of the correlation between DERs and distribution network operation. Furthermore, to validate the framework, the authors described reference models for different categories of DERs with their unique characteristics, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations were performed in two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment brings possibilities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  8. Novel mathematic models for quantitative transitivity of quality-markers in extraction process of the Buyanghuanwu decoction.

    PubMed

    Zhang, Yu-Tian; Xiao, Mei-Feng; Deng, Kai-Wen; Yang, Yan-Tao; Zhou, Yi-Qun; Zhou, Jin; He, Fu-Yuan; Liu, Wen-Long

    2018-06-01

    To research and formulate an efficient extraction system for Chinese herbal medicine, scientists face a persistent challenge in quality management; to address it, the transitivity of Q-markers in the quantitative analysis of TCM was recently proposed by Prof. Liu. In order to improve the quality of extraction from raw medicinal materials for clinical preparations, a series of integrated mathematical models for the transitivity of Q-markers in the quantitative analysis of TCM were established. Buyanghuanwu decoction (BYHWD) is a commonly used TCM prescription for the prevention and treatment of ischemic heart and brain diseases. In this paper, we selected BYHWD as the experimental extraction subject to study the quantitative transitivity of TCM. Based on the theory of Fick's law and the Noyes-Whitney equation, novel kinetic models were established for the extraction of active components. The kinetic equations of the extraction models were fitted, and the inherent parameters of the material pieces and the Q-marker quantitative transfer coefficients were calculated; these served as indexes to evaluate the transitivity of Q-markers in the quantitative analysis of the BYHWD extraction process. HPLC was applied to screen and analyze the potential Q-markers in the extraction process. Fick's law and the Noyes-Whitney equation were adopted for mathematically modeling the extraction process. Kinetic parameters were fitted and calculated with the Statistical Program for Social Sciences 20.0 software. The transfer efficiency was described and evaluated via the potential Q-marker transfer trajectory, using the transitivity availability AUC, the extraction ratio P, and the decomposition ratio D, respectively. Q-markers were identified with AUC, P, and D. Astragaloside IV, laetrile, paeoniflorin, and ferulic acid were studied as potential Q-markers from BYHWD. The relevant technological parameters were presented by the mathematical models, which could adequately illustrate the inherent properties of raw materials
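    For reference, the Noyes-Whitney dissolution equation on which the kinetic models are built has the standard form (the paper's specific extraction-model extensions are not reproduced here):

    ```latex
    \frac{dC}{dt} \;=\; \frac{D\,A}{V\,h}\,\bigl(C_s - C\bigr)
    ```

    with D the diffusion coefficient, A the surface area of the material, h the diffusion-layer thickness, V the solvent volume, C_s the saturation solubility and C the instantaneous concentration.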

  9. NOVEL MODEL DESCRIBING TRACE METAL CONCENTRATIONS IN THE EARTHWORM, EISENIA ANDREI

    EPA Science Inventory

    We developed a novel model describing Eisenia andrei body concentrations for Cd, Cu, Pb, and Zn as a function of pH, metals, and soluble organic carbon (SOC) in soil extracts for potential use in predicting values in contaminated field sites. Data from 17 moderately contaminated ...

  10. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins

    PubMed Central

    2016-01-01

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729−2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942

  11. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins.

    PubMed

    Lee, Sangyun; Liang, Ruibin; Voth, Gregory A; Swanson, Jessica M J

    2016-02-09

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H(+)/Cl(-) antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins.

  12. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.

    PubMed

    Pang, Wei; Coghill, George M

    2015-05-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First the Morven framework itself is briefly introduced in terms of the model formalism employed and output format. We then built a qualitative model for the biophysical process of the osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented with an analysis. Finally the future development of the Morven framework for modelling the dynamic biological systems is discussed. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  13. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    PubMed

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
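    A rough sketch of such an ICA-based quantification workflow (our reading of the approach, with assumed array shapes and sklearn's FastICA; not the authors' code):

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_quantify(X_cal, C_cal, X_new, n_comp):
        """Columns of X_cal/X_new are spectra over wavelength; under
        Beer-Lambert-Bouguer linearity X ~ S @ A.T, with S pure-component
        spectra and A mixing coefficients proportional to concentrations.
        NOTE: ICA component order and sign are arbitrary; a real workflow
        must first match components to analytes (e.g. by spectral correlation)."""
        ica = FastICA(n_components=n_comp, random_state=0, max_iter=5000)
        S = ica.fit_transform(X_cal)          # (n_wavelengths, n_comp) spectra
        A_cal = ica.mixing_                   # (n_mixtures, n_comp) coefficients
        # Per-component linear calibration: concentration = a * coefficient + b.
        coef = np.array([np.polyfit(A_cal[:, j], C_cal[:, j], 1)
                         for j in range(n_comp)])
        A_new, *_ = np.linalg.lstsq(S, X_new, rcond=None)  # project new spectra
        return coef[:, 0] * A_new.T + coef[:, 1]           # predicted concentrations
    ```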

  14. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955

  15. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  16. An integrated multidisciplinary model describing initiation of cancer and the Warburg hypothesis

    PubMed Central

    2013-01-01

    Background In this paper we propose a chemical physics mechanism for the initiation of the glycolytic switch commonly known as the Warburg hypothesis, whereby glycolytic activity terminating in lactate continues even in well-oxygenated cells. We show that this may result in cancer via mitotic failure, recasting the current conception of the Warburg effect as a metabolic dysregulation consequent to cancer, to a biophysical defect that may contribute to cancer initiation. Model Our model is based on analogs of thermodynamic concepts that tie non-equilibrium fluid dynamics ultimately to metabolic imbalance, disrupted microtubule dynamics, and finally, genomic instability, from which cancers can arise. Specifically, we discuss how an analog of non-equilibrium Rayleigh-Benard convection can result in glycolytic oscillations and cause a cell to become locked into a higher-entropy state characteristic of cancer. Conclusions A quantitative model is presented that attributes the well-known Warburg effect to a biophysical mechanism driven by a convective disturbance in the cell. Contrary to current understanding, this effect may precipitate cancer development, rather than follow from it, providing new insights into carcinogenesis, cancer treatment, and prevention. PMID:23758735

  17. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  18. Rift Valley fever transmission dynamics described by compartmental models.

    PubMed

    Danzetta, Maria Luisa; Bruno, Rossana; Sauro, Francesca; Savini, Lara; Calistri, Paolo

    2016-11-01

    Rift Valley fever (RVF) is one of the most important zoonotic Transboundary Animal Diseases, able to cross international borders and cause devastating effects on animal health and food security. Climate changes and the presence of competent vectors in most of the currently RVF-free temperate countries strongly support the inclusion of RVF virus (RVFV) among the most significant emerging viral threats for public and animal health. The transmission of RVFV is driven by complex eco-climatic factors, making the epidemiology of RVF infection difficult to study and to understand. Mathematical, statistical and spatial models are often used to explain the mechanisms underlying these biological processes, providing new and effective tools to plan measures for public health protection. In this paper we performed a systematic literature review of published RVF papers with the aim of identifying and describing the most recent papers developing compartmental models for the study of RVFV transmission dynamics. Copyright © 2016 Elsevier B.V. All rights reserved.
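    As an illustration of the kind of compartmental model the review surveys, a minimal host-vector system (a generic SIR-host/SI-vector sketch, not any specific model from the reviewed papers) can be integrated as follows:

    ```python
    import numpy as np
    from scipy.integrate import odeint

    def rvf(y, t, beta_vh, beta_hv, gamma, mu_v):
        """Right-hand side of a minimal host-vector system: SIR livestock
        hosts, SI mosquito vectors with constant population (births = deaths)."""
        Sh, Ih, Rh, Sv, Iv = y
        Nh = Sh + Ih + Rh
        new_h = beta_vh * Sh * Iv / Nh        # vector-to-host transmission
        new_v = beta_hv * Sv * Ih / Nh        # host-to-vector transmission
        return [-new_h,                                 # dSh/dt
                new_h - gamma * Ih,                     # dIh/dt
                gamma * Ih,                             # dRh/dt
                mu_v * (Sv + Iv) - new_v - mu_v * Sv,   # dSv/dt
                new_v - mu_v * Iv]                      # dIv/dt

    t = np.linspace(0.0, 365.0, 1000)
    sol = odeint(rvf, [9990.0, 10.0, 0.0, 5e4, 0.0], t,
                 args=(0.3, 0.25, 1.0 / 7.0, 1.0 / 20.0))  # illustrative parameters
    ```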

  19. Preclinical MR fingerprinting (MRF) at 7 T: effective quantitative imaging for rodent disease models.

    PubMed

    Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A; Vincent, Jason A; Dell, Katherine M; Drumm, Mitchell L; Brady-Kalnay, Susann M; Griswold, Mark A; Flask, Chris A; Lu, Lan

    2015-03-01

    High-field preclinical MRI scanners are now commonly used to quantitatively assess disease status and the efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical 7.0-T MRI implementation of the highly novel MR fingerprinting (MRF) methodology which has been described previously for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a fast imaging with steady-state free precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 min. This initial high-field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for the quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. Copyright © 2015 John Wiley & Sons, Ltd.
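    The dictionary-matching step described above is commonly implemented as a normalised inner-product search; a compact sketch with hypothetical arrays (not the authors' code):

    ```python
    import numpy as np

    def mrf_match(signals, dictionary, t1_t2):
        """Match each voxel's measured signal evolution (rows of `signals`,
        shape n_voxels x n_timepoints) to the most similar simulated evolution
        (rows of `dictionary`) by normalised inner product, and read off the
        corresponding (T1, T2) entry."""
        d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
        s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
        idx = np.argmax(np.abs(s @ d.T), axis=1)   # best-matching atom per voxel
        # Proton density follows as the projection coefficient onto the atom.
        pd = np.abs(np.sum(signals * dictionary[idx], axis=1)) / \
             np.sum(np.abs(dictionary[idx]) ** 2, axis=1)
        return t1_t2[idx], pd
    ```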

  20. Quantitative modeling of reservoir-triggered seismicity

    NASA Astrophysics Data System (ADS)

    Hainzl, S.; Catalli, F.; Dahm, T.; Heinicke, J.; Woith, H.

    2017-12-01

    Reservoir-triggered seismicity might occur as the response to the crustal stress caused by the poroelastic response to the weight of the water volume and fluid diffusion. Several cases of high correlations have been found in the past decades. However, crustal stresses might be altered by many other processes such as continuous tectonic stressing and coseismic stress changes. Because reservoir-triggered stresses decay quickly with distance, even tidal or rainfall-triggered stresses might be of similar size at depth. To account for simultaneous stress sources in a physically meaningful way, we apply a seismicity model based on calculated stress changes in the crust and laboratory-derived friction laws. Based on the observed seismicity, the model parameters can be determined by maximum likelihood method. The model leads to quantitative predictions of the variations of seismicity rate in space and time which can be used for hypothesis testing and forecasting. For case studies in Talala (India), Val d'Agri (Italy) and Novy Kostel (Czech Republic), we show the comparison of predicted and observed seismicity, demonstrating the potential and limitations of the approach.
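    Seismicity models of this kind are often written in the rate-and-state form of Dieterich (1994); as a point of reference (our addition; the exact formulation used in this work may differ):

    ```latex
    R(t) \;=\; \frac{r}{\gamma(t)\,\dot S_r},
    \qquad
    d\gamma \;=\; \frac{1}{A\sigma}\,\bigl(dt \;-\; \gamma\, dS\bigr)
    ```

    where r is the background rate at the reference stressing rate \dot S_r, S(t) is the total Coulomb stress from all sources (reservoir load, fluid diffusion, tides, tectonic loading), and A\sigma is a laboratory-derived friction parameter.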

  1. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  2. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  3. An extended car-following model to describe connected traffic dynamics under cyberattacks

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Yu, Guizhen; Wu, Xinkai; Qin, Hongmao; Wang, Yunpeng

    2018-04-01

    In this paper, the impacts of potential cyberattacks on vehicles are modeled through an extended car-following model. To better understand the mechanism of traffic disturbance under cyberattacks, linear and nonlinear stability analyses are conducted. In particular, linear stability analysis is performed to obtain different neutral stability conditions with various parameters, and nonlinear stability analysis is carried out by using the reductive perturbation method to derive the soliton solution of the modified Korteweg-de Vries (mKdV) equation near the critical point, which is used to draw coexisting stability lines. Furthermore, by applying linear and nonlinear stability analysis, the traffic flow state can be divided into three states, i.e., stable, metastable and unstable, which are useful to describe shockwave dynamics and driving behaviors under cyberattacks. The theoretical results show that the proposed car-following model is capable of successfully describing the car-following behavior of connected vehicles under cyberattacks. Finally, numerical simulation using real values has confirmed the validity of the theoretical analysis. The results further demonstrate that our model can be used to help avoid collisions and relieve traffic congestion under cybersecurity threats.
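    The abstract does not give the extended model's equation; car-following extensions of this type typically start from an optimal-velocity law such as (a generic baseline, for orientation only):

    ```latex
    \frac{d^2 x_n}{dt^2} \;=\; a\left[\,V(\Delta x_n) \;-\; \frac{dx_n}{dt}\,\right],
    \qquad \Delta x_n = x_{n+1} - x_n
    ```

    with a the driver sensitivity and V(\cdot) the optimal velocity function; cyberattack terms would then perturb the headway information \Delta x_n entering V.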

  4. Genetic variation maintained in multilocus models of additive quantitative traits under stabilizing selection.

    PubMed Central

    Bürger, R; Gimelfarb, A

    1999-01-01

    Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920
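    In the quadratic-optimum model referred to above, the fitness of an individual with genotypic value g conventionally takes the form (our restatement of the standard model; the abstract does not spell it out):

    ```latex
    w(g) \;=\; 1 \;-\; s\,(g-\theta)^2
    ```

    with \theta the optimum phenotype and s the strength of stabilizing selection.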

  5. A quantitative description for efficient financial markets

    NASA Astrophysics Data System (ADS)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  6. Kinetics model for initiation and promotion for describing tumor prevalence from HZE radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.

    1994-01-01

    A kinetics model for cellular repair and misrepair for multiple radiation-induced lesions (mutation-inactivation) is coupled to a two-mutation model of initiation and promotion in tissue to provide a parametric description of tumor prevalence in the Harderian gland in a mouse. Dose-response curves are described for gamma-rays and relativistic ions. The effects of nuclear fragmentation are also considered for high-energy proton and alpha particle exposures The model described provides a parametric description of age-dependent cancer induction for a wide range of radiation fields. We also consider the two hypotheses that radiation acts either solely as an initiator or as both initiator and promoter and make model calculations for fractionation exposures from gamma-rays and relativistic Fe ions. For fractionated Fe exposures, an inverse dose-rate effect is provided by a promotion hypothesis using a mutation rate for promotion typical of single-gene mutations.

  7. A mathematical model describes the malignant transformation of low grade gliomas: Prognostic implications.

    PubMed

    Bogdańska, Magdalena U; Bodnar, Marek; Piotrowska, Monika J; Murek, Michael; Schucht, Philippe; Beck, Jürgen; Martínez-González, Alicia; Pérez-García, Víctor M

    2017-01-01

    Gliomas are the most frequent type of primary brain tumours. Low grade gliomas (LGGs, WHO grade II gliomas) may grow very slowly for the long periods of time, however they inevitably cause death due to the phenomenon known as the malignant transformation. This refers to the transition of LGGs to more aggressive forms of high grade gliomas (HGGs, WHO grade III and IV gliomas). In this paper we propose a mathematical model describing the spatio-temporal transition of LGGs into HGGs. Our modelling approach is based on two cellular populations with transitions between them being driven by the tumour microenvironment transformation occurring when the tumour cell density grows beyond a critical level. We show that the proposed model describes real patient data well. We discuss the relationship between patient prognosis and model parameters. We approximate tumour radius and velocity before malignant transformation as well as estimate the onset of this process.
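    Spatio-temporal glioma models of this family are typically built from reaction-diffusion equations; a generic single-population Fisher-Kolmogorov form (our illustration; the paper's actual system couples two cellular populations) reads:

    ```latex
    \frac{\partial u}{\partial t} \;=\; \nabla\!\cdot\!\bigl(D\,\nabla u\bigr) \;+\; \rho\,u\left(1-\frac{u}{K}\right)
    ```

    where u is the tumour cell density, D the diffusion (infiltration) coefficient, \rho the proliferation rate, and K a carrying capacity; in the authors' model the microenvironment transformation is triggered once the density exceeds a critical level.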

  8. Describing, using 'recognition cones'. [parallel-series model with English-like computer program

    NASA Technical Reports Server (NTRS)

    Uhr, L.

    1973-01-01

    A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.

  9. Hypothesized kinetic models for describing the growth of globular and encrusting demosponges.

    PubMed

    Sipkema, Detmer; Yosef, Nejla A M; Adamczewski, Marcin; Osinga, Ronald; Mendola, Dominick; Tramper, Johannes; Wijffels, René H

    2006-01-01

    The marine sponges Dysidea avara and Chondrosia reniformis (globular forms) were cultured in the laboratory on a diet of viable Phaeodactylum tricornutum cells and dissolved nutrients (algae and fish powders). Our growth data were combined with literature data for Pseudosuberites andrewsi (a globular sponge) and for the encrusting sponges Oscarella lobularis, Hemimycale columella, and Crambe crambe. The suitability of three growth models (linear, exponential, and radial accretive) for describing the growth of globular and encrusting sponges was assessed. Radial accretive growth was determined to be the best model to describe growth of both encrusting and globular sponges. Average growth rates of 0.051+/-0.016 and 0.019+/-0.003 mm/day (calculated as the increase of the radius of the sponge per day) were obtained experimentally for D. avara and C. reniformis, respectively.
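    The three candidate kinetics can be summarised compactly (our restatement):

    ```latex
    \text{linear: } \frac{dV}{dt} = k_1, \qquad
    \text{exponential: } \frac{dV}{dt} = k_2\,V, \qquad
    \text{radial accretive: } \frac{dr}{dt} = k_3
    ```

    so that under radial accretive growth the radius grows linearly in time (the 0.051 and 0.019 mm/day figures above are estimates of k_3), while the volume of a globular sponge grows approximately cubically.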

  10. Quantitative Systems Pharmacology: A Case for Disease Models

    PubMed Central

    Ramanujan, S; Schmidt, BJ; Ghobrial, OG; Lu, J; Heatherington, AC

    2016-01-01

    Quantitative systems pharmacology (QSP) has emerged as an innovative approach in model‐informed drug discovery and development, supporting program decisions from exploratory research through late‐stage clinical trials. In this commentary, we discuss the unique value of disease‐scale “platform” QSP models that are amenable to reuse and repurposing to support diverse clinical decisions in ways distinct from other pharmacometrics strategies. PMID:27709613

  11. Fitting mathematical models to describe the rheological behaviour of chocolate pastes

    NASA Astrophysics Data System (ADS)

    Barbosa, Carla; Diogo, Filipa; Alves, M. Rui

    2016-06-01

    The flow behavior is of utmost importance for the chocolate industry. The objective of this work was to study two mathematical models, the Casson and Windhab models, that can be used to fit chocolate rheological data, and to evaluate which better predicts the rheological behaviour of different chocolate pastes. Rheological properties (viscosity, shear stress and shear rates) were obtained with a rotational viscometer equipped with a concentric cylinder. The chocolate samples were white chocolate and chocolate with varying percentages of cacao (55%, 70% and 83%). The results showed that the Windhab model was the best to describe the flow behaviour of all the studied samples, with higher determination coefficients (r2 > 0.9).
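    For reference, the two models have the following standard forms from the rheology literature (parameter symbols are ours):

    ```latex
    \text{Casson: } \sqrt{\tau} \;=\; \sqrt{\tau_0} \;+\; \sqrt{\eta_{\infty}\,\dot\gamma}
    \qquad
    \text{Windhab: } \tau \;=\; \tau_0 + (\tau_1 - \tau_0)\Bigl(1 - e^{-\dot\gamma/\dot\gamma^{*}}\Bigr) + \eta_{\infty}\,\dot\gamma
    ```

    with \tau_0 a yield stress, \eta_{\infty} the high-shear viscosity, and, in the Windhab model, \tau_1 and \dot\gamma^{*} describing the shear-thinning transition.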

  12. A Systematic Quantitative-Qualitative Model: How To Evaluate Professional Services

    ERIC Educational Resources Information Center

    Yoda, Koji

    1973-01-01

    The proposed evaluation model provides for the assignment of relative weights to each criterion, and establishes a weighting system for calculating a quantitative-qualitative raw score for each service activity of a faculty member being reviewed. (Author)

  13. Using the MWC model to describe heterotropic interactions in hemoglobin

    PubMed Central

    Rapp, Olga

    2017-01-01

    Hemoglobin is a classical model allosteric protein. Research on hemoglobin parallels the development of key cooperativity and allostery concepts, such as the ‘all-or-none’ Hill formalism, the stepwise Adair binding formulation and the concerted Monod-Wyman-Changeux (MWC) allosteric model. While it is clear that the MWC model adequately describes the cooperative binding of oxygen to hemoglobin, rationalizing the effects of H+, CO2 or organophosphate ligands on hemoglobin-oxygen saturation using the same model remains controversial. According to the MWC model, allosteric ligands exert their effect on protein function by modulating the quaternary conformational transition of the protein. However, data fitting analysis of hemoglobin oxygen saturation curves in the presence or absence of inhibitory ligands persistently revealed effects on both relative oxygen affinity (c) and conformational changes (L), elementary MWC parameters. The recent realization that data fitting analysis using the traditional MWC model equation may not provide reliable estimates for L and c thus calls for a re-examination of previous data using alternative fitting strategies. In the current manuscript, we present two simple strategies for obtaining reliable estimates for MWC mechanistic parameters of hemoglobin steady-state saturation curves in cases of both evolutionary and physiological variations. Our results suggest that the simple MWC model provides a reasonable description that can also account for heterotropic interactions in hemoglobin. The results, moreover, offer a general roadmap for successful data fitting analysis using the MWC model. PMID:28793329
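    For reference, the concerted MWC saturation function underlying such fits is (standard form, with n = 4 for hemoglobin):

    ```latex
    \bar Y \;=\; \frac{\alpha(1+\alpha)^{n-1} \;+\; L\,c\,\alpha(1+c\alpha)^{n-1}}
    {(1+\alpha)^{n} \;+\; L\,(1+c\alpha)^{n}},
    \qquad
    \alpha=\frac{[\mathrm{O}_2]}{K_R},\quad c=\frac{K_R}{K_T}
    ```

    where L is the T/R conformational equilibrium constant in the absence of oxygen; in the MWC picture, heterotropic ligands such as H+, CO2 or organophosphates act by modulating L.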

  14. Observing Clonal Dynamics across Spatiotemporal Axes: A Prelude to Quantitative Fitness Models for Cancer.

    PubMed

    McPherson, Andrew W; Chan, Fong Chun; Shah, Sohrab P

    2018-02-01

    The ability to accurately model evolutionary dynamics in cancer would allow for prediction of progression and response to therapy. As a prelude to quantitative understanding of evolutionary dynamics, researchers must gather observations of in vivo tumor evolution. High-throughput genome sequencing now provides the means to profile the mutational content of evolving tumor clones from patient biopsies. Together with the development of models of tumor evolution, reconstructing evolutionary histories of individual tumors generates hypotheses about the dynamics of evolution that produced the observed clones. In this review, we provide a brief overview of the concepts involved in predicting evolutionary histories, and provide a workflow based on bulk and targeted-genome sequencing. We then describe the application of this workflow to time series data obtained for transformed and progressed follicular lymphomas (FL), and contrast the observed evolutionary dynamics between these two subtypes. We next describe results from a spatial sampling study of high-grade serous (HGS) ovarian cancer, propose mechanisms of disease spread based on the observed clonal mixtures, and provide examples of diversification through subclonal acquisition of driver mutations and convergent evolution. Finally, we state implications of the techniques discussed in this review as a necessary but insufficient step on the path to predictive modelling of disease dynamics. Copyright © 2018 Cold Spring Harbor Laboratory Press; all rights reserved.

  15. Human judgment vs. quantitative models for the management of ecological resources.

    PubMed

    Holden, Matthew H; Ellner, Stephen P

    2016-07-01

    Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed

  16. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239
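    A single-edge toy version of the idea (illustrative only; the paper optimises whole network topologies, and the transfer-function details and bounds here are our assumptions) can be posed as a regular NLP with scipy, with inputs normalised to (0, 1]:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def hill(x, n, k):
        """Normalised Hill transfer function (hill(1) == 1), the building
        block of constrained fuzzy logic pathway models."""
        return x ** n * (1.0 + k ** n) / (x ** n + k ** n)

    def fit_edge(x_in, y_obs):
        """Fit one input-output edge by nonlinear least squares; the paper
        poses the full signalling topology this way, with pre/post-processing
        to drop unidentifiable parts of the network."""
        loss = lambda p: np.sum((hill(x_in, p[0], p[1]) - y_obs) ** 2)
        res = minimize(loss, x0=[2.0, 0.5],
                       bounds=[(1.0, 10.0), (0.01, 1.0)], method="L-BFGS-B")
        return res.x  # fitted (n, k)
    ```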

  17. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.

  18. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivates this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogenenous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that
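    Of the quantitative models compared, the two-compartment (one-tissue) model is the simplest; it is conventionally written as (standard form, not specific to this study):

    ```latex
    \frac{dC_t(t)}{dt} \;=\; K_1\,C_a(t) \;-\; k_2\,C_t(t)
    ```

    with C_a the arterial input function, C_t the myocardial tissue concentration, and MBF derived from the uptake rate K_1 together with an assumed extraction fraction.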

  19. Quantitative Adverse Outcome Pathways and Their ...

    EPA Pesticide Factsheets

    A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course predictions that can support regulatory decision-making. Herein we describe several facets of qAOPs, including (a) motivation for development, (b) technical considerations, (c) evaluation of confidence, and (d) potential applications. The qAOP used as an illustrative example for these points describes the linkage between inhibition of cytochrome P450 19A aromatase (the MIE) and population-level decreases in the fathead minnow (FHM; Pimephales promelas). The qAOP consists of three linked computational models for the following: (a) the hypothalamic-pituitary-gonadal axis in female FHMs, where aromatase inhibition decreases the conversion of testosterone to 17β-estradiol (E2), thereby reducing E2-dependent vitellogenin (VTG; egg yolk protein precursor) synthesis, (b) VTG-dependent egg development and spawning (fecundity), and (c) fecundity-dependent population trajectory. While development of the example qAOP was based on experiments with FHMs exposed to the aromatase inhibitor fadrozole, we also show how a toxic equivalence (TEQ) calculation allows use of the qAOP to predict effects of another, untested aromatase inhibitor, iprodione. While qAOP development can be resource-intensive, the quan
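
    The toxic-equivalence step is simple enough to show inline: the dose of an untested inhibitor is scaled by its potency relative to the reference chemical so that the fitted qAOP can be reused. A minimal sketch, with made-up potency numbers:

      # Minimal sketch of a toxic equivalence (TEQ) conversion. EC50 values
      # here are illustrative placeholders, not measured potencies.
      fadrozole_ec50 = 0.04    # reference aromatase inhibitor
      iprodione_ec50 = 40.0    # untested aromatase inhibitor

      relative_potency = fadrozole_ec50 / iprodione_ec50

      def fadrozole_equivalent(iprodione_dose):
          # Reference-chemical dose predicted to give the same MIE response.
          return iprodione_dose * relative_potency

      print(fadrozole_equivalent(100.0))  # feed this into the fitted qAOP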

  20. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  1. A theoretical model to describe progressions and regressions for exercise rehabilitation.

    PubMed

    Blanchard, Sam; Glasgow, Phil

    2014-08-01

    This article describes a new theoretical model to simplify and aid visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists, but it is an area lacking theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack visual aids to assist clinicians. The concept of the proposed model is that new stimuli, either intrinsic or extrinsic to the participant, can be added or exchanged with other stimuli in order to gradually progress an exercise whilst remaining safe and effective. The proposed model maintains the core skills of physiotherapists by assisting clinical reasoning, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can be adapted by clinicians of any skill level. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. INCAS: an analytical model to describe displacement cascades

    NASA Astrophysics Data System (ADS)

    Jumel, Stéphanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on dilute alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it determines the mean number of sub-cascades induced by a PKA as a function of its energy, as well as the mean energy dissipated in each of them. Experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study; however, it was verified that INCAS results are consistent with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  3. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  4. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
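
    A minimal simulation sketch of the dual-source idea follows: one noise term perturbs the stimulation strength before a sigmoidal recruitment curve, and a second perturbs the output after it, which makes the spread and skewness of MEP amplitudes vary along the IO curve. All parameter values are illustrative, not the fitted subject data.

      # Minimal sketch: MEP variability from two sources acting before and
      # after a sigmoidal recruitment nonlinearity (illustrative parameters).
      import numpy as np

      rng = np.random.default_rng(1)

      def recruitment(x, low=-6.0, high=0.5, mid=1.0, slope=8.0):
          # Sigmoid on a log10-amplitude scale versus stimulation strength x.
          return low + (high - low) / (1.0 + np.exp(-slope * (x - mid)))

      def simulate_meps(strength, n=10000, s_in=0.05, s_out=0.15):
          x = strength + rng.normal(0.0, s_in, n)         # noise before sigmoid
          y = recruitment(x) + rng.normal(0.0, s_out, n)  # noise after sigmoid
          return 10.0 ** y                                # MEP amplitudes (mV)

      for s in (0.8, 1.0, 1.2):              # three points along the IO curve
          amps = simulate_meps(s)
          print(s, amps.mean(), amps.std())  # spread changes along the curve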

  5. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption.

    PubMed

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Mardor, Yael; Miklavcic, Damijan

    2016-03-01

    Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain-barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation, which can then be used for individual treatment planning. Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated against in vivo experimental data consisting of MRIs of rat brains following electroporation treatments. Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r² = 0.79, p < 0.008; r² = 0.91, p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete-cell-death and no-cell-death thresholds was relatively narrow (between 0.88 and 0.91) even for small numbers of pulses and depended only weakly on the number of pulses; for BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE radii. The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup.
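
    For reference, the Peleg-Fermi dependence of survival on field strength E and pulse number n has a compact form; the sketch below uses illustrative coefficients, not the values fitted to the rat data.

      # Minimal sketch of the Peleg-Fermi model: survival as a function of
      # local electric field E (V/cm) and pulse number n (toy coefficients).
      import numpy as np

      def peleg_fermi(E, n, Ec0=1200.0, k1=0.03, A0=200.0, k2=0.06):
          Ec = Ec0 * np.exp(-k1 * n)   # 50%-survival field, decays with n
          A = A0 * np.exp(-k2 * n)     # transition width, decays with n
          return 1.0 / (1.0 + np.exp((E - Ec) / A))

      E = np.linspace(0.0, 2000.0, 5)
      print(peleg_fermi(E, n=10))   # higher threshold, shallower transition
      print(peleg_fermi(E, n=90))   # plateau: little further change with n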

  6. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption

    PubMed Central

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Miklavcic, Damijan

    2016-01-01

    Background Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain-barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation, which can then be used for individual treatment planning. Material and methods Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes their dependence on the number of treatment pulses. The model was validated against in vivo experimental data consisting of MRIs of rat brains following electroporation treatments. Results Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r² = 0.79, p < 0.008; r² = 0.91, p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between the complete-cell-death and no-cell-death thresholds was relatively narrow (between 0.88 and 0.91) even for small numbers of pulses and depended only weakly on the number of pulses; for BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE radii. Conclusions The statistical model can be used to describe the dependence of treatment effects on the number of pulses independent of the experimental setup. PMID:27069447

  7. Ordinal regression models to describe tourist satisfaction with Sintra's world heritage

    NASA Astrophysics Data System (ADS)

    Mouriño, Helena

    2013-10-01

    In tourism research, ordinal regression models are becoming a very powerful tool for modelling the relationship between an ordinal response variable and a set of explanatory variables. In August and September 2010, we conducted a pioneering tourist survey in Sintra, Portugal. The data were obtained by face-to-face interviews at the entrances of the Palaces and Parks of Sintra. The work developed in this paper focuses on two main points: tourists' perception of the entrance fees and their overall level of satisfaction with this heritage site. To attain these goals, ordinal regression models were developed. We concluded that tourists' nationality was the only significant variable describing the perception of the admission fees. Sintra's image among tourists, in turn, depends not only on their nationality but also on previous knowledge of Sintra's World Heritage status.
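
    As a minimal sketch of the method (not the survey data), a proportional-odds model can be fit with the OrderedModel class in statsmodels; the covariates and the simulated responses below are placeholders.

      # Minimal sketch: ordinal (proportional-odds) regression on simulated
      # survey-style data; variable names and effects are made up.
      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(2)
      n = 300
      df = pd.DataFrame({
          "nationality_pt": rng.integers(0, 2, n),   # 1 = Portuguese visitor
          "knew_wh_status": rng.integers(0, 2, n),   # knew of WH status
      })
      latent = (0.9 * df.knew_wh_status + 0.5 * df.nationality_pt
                + rng.normal(0.0, 1.0, n))
      cats = pd.CategoricalDtype(categories=[0, 1, 2], ordered=True)
      df["satisfaction"] = pd.cut(latent, [-np.inf, 0.3, 1.2, np.inf],
                                  labels=[0, 1, 2]).astype(cats)

      model = OrderedModel(df["satisfaction"],
                           df[["nationality_pt", "knew_wh_status"]],
                           distr="logit")
      print(model.fit(method="bfgs", disp=False).summary())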

  8. The mathematical research for the Kuramoto model describing neuronal synchrony in the brain

    NASA Astrophysics Data System (ADS)

    Lin, Chang; Lin, Mai-mai

    2009-08-01

    The Kuramoto model describing neuronal synchrony in the brain is investigated mathematically. General analytical solutions for the Kuramoto model, incorporating a K_ij(t) term to represent time-varying coupling strengths, are obtained using an exact mathematical approach. We derive an exact analytical expression for the mathematical character of the phase configurations in the model, revealing an underlying linear relation among the phases.
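
    The model in question, with its time-varying coupling, is straightforward to integrate numerically; a minimal Euler-integration sketch, with an illustrative all-to-all K_ij(t) ramp, follows.

      # Minimal sketch of the Kuramoto model with time-varying coupling:
      #   dtheta_i/dt = omega_i + sum_j K_ij(t) * sin(theta_j - theta_i)
      # Network size and coupling schedule are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      N, dt, steps = 50, 0.01, 5000
      omega = rng.normal(0.0, 1.0, N)          # natural frequencies
      theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases

      def K(t, t_end=50.0):
          # All-to-all coupling that ramps up over time (scaled by 1/N).
          return (0.5 + 1.5 * t / t_end) / N

      for step in range(steps):
          diff = theta[None, :] - theta[:, None]   # theta_j - theta_i
          theta += dt * (omega + K(step * dt) * np.sin(diff).sum(axis=1))

      r = np.abs(np.exp(1j * theta).mean())   # order parameter in [0, 1]
      print(r)                                # rises as coupling strengthens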

  9. Global Quantitative Modeling of Chromatin Factor Interactions

    PubMed Central

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provide a highly accurate predictor of pairwise chromatin factor interactions, validated by known experimental evidence, and for the first time enable higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles: we demonstrate that this approach can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
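
    A minimal sketch of the pairwise maximum-entropy idea, with hand-set illustrative couplings rather than parameters learned from modENCODE, enumerates the distribution over a toy four-factor chromatin code:

      # Minimal sketch: pairwise maximum-entropy (Ising-like) model of
      # binary chromatin-factor co-occurrence, enumerated for 4 toy factors.
      import itertools
      import numpy as np

      h = np.array([-0.5, 0.2, -0.1, 0.0])    # per-factor biases
      J = np.zeros((4, 4))
      J[0, 1] = J[1, 0] = 1.2                 # factors 0, 1 co-occur
      J[2, 3] = J[3, 2] = -0.8                # factors 2, 3 avoid each other

      states = np.array(list(itertools.product([0, 1], repeat=4)))
      score = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
      p = np.exp(score)
      p /= p.sum()                            # probability of each code

      top = sorted(zip(states.tolist(), p), key=lambda sp: -sp[1])[:3]
      for state, prob in top:
          print(state, round(float(prob), 3))  # most likely chromatin codes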

  10. Preclinical Magnetic Resonance Fingerprinting (MRF) at 7 T: Effective Quantitative Imaging for Rodent Disease Models

    PubMed Central

    Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A.; Vincent, Jason A.; Dell, Katherine M.; Drumm, Mitchell L.; Brady-Kalnay, Susann M.; Griswold, Mark A.; Flask, Chris A.; Lu, Lan

    2015-01-01

    High-field preclinical magnetic resonance imaging (MRI) scanners are now commonly used to quantitatively assess disease status and the efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts, resulting in potentially inaccurate and misleading data. We have developed an initial preclinical, 7.0 T MRI implementation of the novel Magnetic Resonance Fingerprinting (MRF) methodology previously described for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a Fast Imaging with Steady-state Free Precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 minutes. This initial high-field preclinical MRF investigation demonstrated reproducible and well-differentiated relaxation time estimates for in vitro phantoms. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that the MRF methodology may offer the opportunity to quantify numerous MRI parameters for a wide variety of preclinical imaging applications. PMID:25639694
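
    The dictionary-matching core of MRF is compact; the sketch below builds a toy dictionary from a placeholder signal model (a real implementation would use Bloch/FISP simulations) and matches a measured evolution by maximum inner product.

      # Minimal sketch of MRF dictionary matching (toy signal model, not a
      # Bloch/FISP simulation; T1/T2 grids are illustrative).
      import numpy as np

      t = np.linspace(0.01, 3.0, 600)          # 600 MRF time points (s)

      def toy_evolution(T1, T2):
          # Placeholder for the simulated evolution of a (T1, T2) pair.
          return np.exp(-t / T2) * (1.0 - np.exp(-t / T1))

      pairs = [(T1, T2) for T1 in np.linspace(0.2, 2.0, 40)
                        for T2 in np.linspace(0.02, 0.3, 40)]
      D = np.array([toy_evolution(*p) for p in pairs])
      D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm entries

      rng = np.random.default_rng(4)
      measured = toy_evolution(1.0, 0.1) + rng.normal(0.0, 0.01, t.size)
      best = int(np.argmax(D @ (measured / np.linalg.norm(measured))))
      print(pairs[best])                       # recovers roughly (1.0, 0.1)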

  11. Communication skills training: describing a new conceptual model.

    PubMed

    Brown, Richard F; Bylund, Carma L

    2008-01-01

    Current research in communication in physician-patient consultations is multidisciplinary and multimethodological. As this research has progressed, a considerable body of evidence on the best practices in physician-patient communication has been amassed. This evidence provides a foundation for communication skills training (CST) at all levels of medical education. Although the CST literature has demonstrated that communication skills can be taught, one critique of this literature is that it is not always clear which skills are being taught and whether those skills are matched with those being assessed. The Memorial Sloan-Kettering Cancer Center Comskil Model for CST seeks to answer those critiques by explicitly defining the important components of a consultation, based on Goals, Plans, and Actions theories and sociolinguistic theory. Sequenced guidelines as a mechanism for teaching about particular communication challenges are adapted from these other methods. The authors propose that consultation communication can be guided by an overarching goal, which is achieved through the use of a set of predetermined strategies. Strategies are common in CST; however, strategies often contain embedded communication skills. These skills can exist across strategies, and the Comskil Model seeks to make them explicit in these contexts. Separate from the skills are process tasks and cognitive appraisals that need to be addressed in teaching. The authors also describe how assessment practices foster concordance between skills taught and those assessed through careful coding of trainees' communication encounters and direct feedback.

  12. Parameter estimation in a human operator describing function model for a two-dimensional tracking task

    NASA Technical Reports Server (NTRS)

    Vanlunteren, A.

    1977-01-01

    A previously described parameter estimation program was applied to a number of control tasks, each involving a human operator model consisting of more than one describing function. One of these experiments is treated in more detail: a two-dimensional tracking task with identical controlled elements. The tracking errors were presented on one display as two vertically moving horizontal lines, and each loop had its own manipulator. The two forcing functions were mutually independent and each consisted of 9 sine waves. A human operator model consisting of 4 describing functions was chosen, thus taking possible linear cross-couplings into account. From the Fourier coefficients of the relevant signals, the model parameters were estimated after alignment, averaging over a number of runs, and decoupling. The results show that the crossover model applies to the elements in the main loops. A weak linear cross-coupling existed with the same dynamics as the elements in the main loops but with a negative sign.

  13. Describing complex cells in primary visual cortex: a comparison of context and multi-filter LN models.

    PubMed

    Westö, Johan; May, Patrick J C

    2018-05-02

    Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior.
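
    For orientation, a single-filter LN model (the baseline that the multi-filter and context models extend) can be written in a few lines; the filter shape, nonlinearity, and stimulus below are illustrative.

      # Minimal sketch of an LN model: linear filtering of the stimulus
      # followed by a static nonlinearity and Poisson spiking (toy values).
      import numpy as np

      rng = np.random.default_rng(5)
      stim = rng.normal(0.0, 1.0, (5000, 16))   # time x stimulus dimensions

      w = np.exp(-np.arange(16) / 4.0)          # linear RF filter (L stage)
      w /= np.linalg.norm(w)

      drive = stim @ w                            # filter outputs
      rate = np.log1p(np.exp(2.0 * drive - 1.0))  # softplus (N stage)
      spikes = rng.poisson(rate * 0.05)           # spike counts per bin

      print(spikes.mean())   # context models add local stimulus interactions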

  14. An algorithm to detect and communicate the differences in computational models describing biological systems.

    PubMed

    Scharm, Martin; Wolkenhauer, Olaf; Waltemath, Dagmar

    2016-02-15

    Repositories support the reuse of models and ensure transparency about results in publications linked to those models. With thousands of models available in repositories, such as the BioModels database or the Physiome Model Repository, a framework to track the differences between models and their versions is essential to compare and combine models. Difference detection not only allows users to study the history of models but also helps in the detection of errors and inconsistencies. Existing repositories lack algorithms to track a model's development over time. Focusing on SBML and CellML, we present an algorithm to accurately detect and describe differences between coexisting versions of a model with respect to (i) the models' encoding, (ii) the structure of biological networks and (iii) mathematical expressions. This algorithm is implemented in a comprehensive and open source library called BiVeS. BiVeS helps to identify and characterize changes in computational models and thereby contributes to the documentation of a model's history. Our work facilitates the reuse and extension of existing models and supports collaborative modelling. Finally, it contributes to better reproducibility of modelling results and to the challenge of model provenance. The workflow described in this article is implemented in BiVeS. BiVeS is freely available as source code and binary from sems.uni-rostock.de. The web interface BudHat demonstrates the capabilities of BiVeS at budhat.sems.uni-rostock.de. © The Author 2015. Published by Oxford University Press.

  15. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have sufficient spatial resolution and contrast to allow a quantitative investigation of the neuronal network. Exploiting the high coherence and high flux of synchrotron sources, X-ray phase-contrast multiscale tomography allows the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis of the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mouse spinal cord. We then compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” of the main features of the neuronal network for a comparative investigation of neurodegenerative diseases and therapies.

  16. Describing litho-constrained layout by a high-resolution model filter

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Chun

    2008-05-01

    A novel high-resolution model (HRM) filtering technique is proposed to describe litho-constrained layouts, i.e., layouts that are difficult to pattern or are highly sensitive to process fluctuations under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly to the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components, which are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used at the design stage to detect and fix litho-constrained patterns. The method successfully captured all the hot spots, with less than 15% overshoot, on a realistic 80 mm² full-chip M1 layout at the 65 nm technology node. A step-by-step derivation of the HRM technique is presented in this paper.

  17. Neuroergonomics: Quantitative Modeling of Individual, Shared, and Team Neurodynamic Information.

    PubMed

    Stevens, Ronald H; Galloway, Trysha L; Willemsen-Dunlap, Ann

    2018-06-01

    The aim of this study was to use the same quantitative measure and scale to directly compare the neurodynamic information/organizations of individual team members with those of the team. Team processes are difficult to separate from those of individual team members due to the lack of quantitative measures that can be applied to both process sets. Second-by-second symbolic representations were created of each team member's electroencephalographic power, and quantitative estimates of their neurodynamic organizations were calculated from the Shannon entropy of the symbolic data streams. The information in the neurodynamic data streams of health care (n = 24), submarine navigation (n = 12), and high school problem-solving (n = 13) dyads was separated into the information of each team member, the information shared by team members, and the overall team information. Most of the team information was the sum of each individual's neurodynamic information. The remaining team information was shared among the team members. This shared information averaged ~15% of the individual information, with momentary levels of 1% to 80%. Continuous quantitative estimates can be made from the shared, individual, and team neurodynamic information about the contributions of different team members to the overall neurodynamic organization of a team and the neurodynamic interdependencies among the team members. Information models provide a generalizable quantitative method for separating a team's neurodynamic organization into that of individual team members and that shared among team members.
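
    The decomposition rests on standard Shannon quantities: summing each member's entropy and subtracting the entropy of the joint symbol stream leaves the information shared between members. A minimal sketch on made-up symbol streams:

      # Minimal sketch: individual, joint, and shared information of two
      # symbolic data streams (toy symbols, not EEG-derived).
      import numpy as np
      from collections import Counter

      def entropy_bits(symbols):
          counts = np.array(list(Counter(symbols).values()), dtype=float)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      rng = np.random.default_rng(6)
      a = rng.integers(0, 4, 1000)                    # member A's symbols
      b = (a + rng.integers(0, 2, 1000)) % 4          # member B, coupled to A

      joint = list(zip(a.tolist(), b.tolist()))       # team symbol stream
      shared = entropy_bits(a) + entropy_bits(b) - entropy_bits(joint)
      print(entropy_bits(a), entropy_bits(b), shared) # shared = mutual info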

  18. Framework for a Quantitative Systemic Toxicity Model (FutureToxII)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  19. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many critical properties, such as security, safety, survivability, fault tolerance, and real-time performance.

  20. Predictive value of EEG in postanoxic encephalopathy: A quantitative model-based approach.

    PubMed

    Efthymiou, Evdokia; Renzel, Roland; Baumann, Christian R; Poryazova, Rositsa; Imbach, Lukas L

    2017-10-01

    The majority of comatose patients after cardiac arrest do not regain consciousness due to severe postanoxic encephalopathy. Early and accurate outcome prediction is therefore essential in determining further therapeutic interventions. The electroencephalogram (EEG) is a standardized and commonly available tool used to estimate prognosis in postanoxic patients, but the identification of pathological EEG patterns with poor prognosis relies primarily on visual EEG scoring by experts. We introduced a model-based approach to EEG analysis (a state space model) that allows an objective and quantitative description of spectral EEG variability. We retrospectively analyzed standard EEG recordings in 83 comatose patients after cardiac arrest between 2005 and 2013 in the intensive care unit of the University Hospital Zürich. Neurological outcome was assessed one month after cardiac arrest using the Cerebral Performance Category. For a dynamic and quantitative EEG analysis, we implemented the model-based state space approach to quantify EEG background variability independently of visual scoring of EEG epochs. Spectral variability was compared between groups and correlated with clinical outcome parameters and visual EEG patterns. Quantitative assessment of spectral EEG variability (state space velocity) revealed significant differences between patients with poor and good outcome after cardiac arrest: lower mean velocity in the temporal electrodes (T4 and T5) was significantly associated with poor prognostic outcome (p < 0.005) and correlated with independently identified visual EEG patterns such as generalized periodic discharges (p < 0.02). Receiver operating characteristic (ROC) analysis confirmed the predictive value of lower state space velocity for poor clinical outcome after cardiac arrest (AUC 80.8, 70% sensitivity, 15% false positive rate). Model-based quantitative EEG analysis (state space analysis) provides a novel, complementary marker for prognosis in postanoxic

  1. Tip-Enhanced Raman Voltammetry: Coverage Dependence and Quantitative Modeling.

    PubMed

    Mattei, Michael; Kang, Gyeongwon; Goubert, Guillaume; Chulhai, Dhabih V; Schatz, George C; Jensen, Lasse; Van Duyne, Richard P

    2017-01-11

    Electrochemical atomic force microscopy tip-enhanced Raman spectroscopy (EC-AFM-TERS) was employed for the first time to observe nanoscale spatial variations in the formal potential, E0', of a surface-bound redox couple. TERS cyclic voltammograms (TERS CVs) of single Nile Blue (NB) molecules were acquired at different locations spaced 5-10 nm apart on an indium tin oxide (ITO) electrode. Analysis of TERS CVs at different coverages was used to verify the observation of single-molecule electrochemistry. The resulting TERS CVs were fit to the Laviron model for surface-bound electroactive species to quantitatively extract the formal potential E0' at each spatial location. Histograms of single-molecule E0' at each coverage indicate that the electrochemical behavior of the cationic oxidized species is less sensitive to local environment than that of the neutral reduced species. This information is not accessible using purely electrochemical methods or ensemble spectroelectrochemical measurements. We anticipate that quantitative modeling and measurement of site-specific electrochemistry with EC-AFM-TERS will have a profound impact on our understanding of the role of nanoscale electrode heterogeneity in applications such as electrocatalysis, biological electron transfer, and energy production and storage.
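
    A minimal sketch of the fitting step: for a surface-bound couple, the fraction in the reduced form follows a Nernstian sigmoid in potential, and fitting that sigmoid to a normalized single-molecule signal trace yields E0'. The trace below is synthetic, and one-electron kinetics are assumed.

      # Minimal sketch: extracting E0' by fitting a Nernstian sigmoid for a
      # surface-bound redox couple to a synthetic, normalized signal trace.
      import numpy as np
      from scipy.optimize import curve_fit

      F, R, T = 96485.0, 8.314, 298.0            # C/mol, J/(mol K), K

      def reduced_fraction(E, E0, n=1.0):
          # Nernstian fraction of the reduced (signal-carrying) form.
          return 1.0 / (1.0 + np.exp(n * F * (E - E0) / (R * T)))

      rng = np.random.default_rng(7)
      E = np.linspace(-0.6, 0.0, 60)             # potential sweep (V)
      signal = reduced_fraction(E, -0.30) + rng.normal(0.0, 0.03, E.size)

      (e0_hat,), _ = curve_fit(lambda E, E0: reduced_fraction(E, E0),
                               E, signal, p0=[-0.25])
      print(e0_hat)    # per-site values of E0' build the histograms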

  2. Application of describing function analysis to a model of deep brain stimulation.

    PubMed

    Davidson, Clare Muireann; de Paor, Annraoi M; Lowery, Madeleine M

    2014-03-01

    Deep brain stimulation effectively alleviates motor symptoms of medically refractory Parkinson's disease, and also relieves many other treatment-resistant movement and affective disorders. Despite its relative success as a treatment option, the basis of its efficacy remains elusive. In Parkinson's disease, increased functional connectivity and oscillatory activity occur within the basal ganglia as a result of dopamine loss. A correlative relationship between pathological oscillatory activity and the motor symptoms of the disease, in particular bradykinesia, rigidity, and tremor, has been established. Suppression of the oscillations by either dopamine replacement or DBS also correlates with an improvement in motor symptoms. DBS parameters are currently chosen empirically using a "trial and error" approach, which can be time-consuming and costly. The work presented here amalgamates concepts from theories of neural network modeling with nonlinear control engineering to describe and analyze a model of synchronous neural activity and applied stimulation. A theoretical expression for the optimum stimulation parameters necessary to suppress oscillations is derived. The effect of changing stimulation parameters (amplitude and pulse duration) on induced oscillations is studied in the model. Increasing either stimulation pulse duration or amplitude enhanced the level of suppression. The predicted parameters were found to agree well with clinical measurements reported in the literature for individual patients. It is anticipated that the simplified model described may facilitate the development of protocols to aid optimum stimulation parameter choice on a patient-by-patient basis.

  3. [A quantitative risk assessment model of salmonella on carcass in poultry slaughterhouse].

    PubMed

    Zhang, Yu; Chen, Yuzhen; Hu, Chunguang; Zhang, Huaning; Bi, Zhenwang; Bi, Zhenqiang

    2015-05-01

    To construct a quantitative risk assessment model of salmonella on carcasses in a poultry slaughterhouse and to identify effective interventions to reduce salmonella contamination, we constructed a modular process risk model (MPRM) covering the steps from evisceration to chilling in an Excel sheet, using poultry process-parameter data and the salmonella concentration surveillance data of Jinan from 2012. The MPRM was simulated with the @Risk software. The model-calculated concentration of salmonella on carcasses after chilling was 1.96 MPN/g. The sensitivity analysis indicated that the correlation coefficients for the salmonella concentration after defeathering and in the chilling pool were 0.84 and 0.34, respectively, making these the primary factors determining the concentration of salmonella on carcasses after chilling. The study provides a quantitative assessment model structure for salmonella on carcasses in poultry slaughterhouses. Risk managers could control salmonella contamination on carcasses after chilling by reducing the salmonella concentration after defeathering and in the chilling pool.

  4. A Quantitative Model of Motility Reveals Low-Dimensional Variation in Exploratory Behavior Across Multiple Nematode Species

    NASA Astrophysics Data System (ADS)

    Helms, Stephen; Avery, Leon; Stephens, Greg; Shimizu, Tom

    2014-03-01

    Animal behavior emerges from many layers of biological organization--from molecular signaling pathways and neuronal networks to mechanical outputs of muscles. In principle, the large number of interconnected variables at each of these layers could imply dynamics that are complex and hard to control or even tinker with. Yet, for organisms to survive in a competitive, ever-changing environment, behavior must readily adapt. We applied quantitative modeling to identify important aspects of behavior in chromadorean nematodes ranging from the lab strain C. elegans N2 to wild strains and distant species. We revealed subtle yet important features such as speed control and heavy-tailed directional changes. We found that the parameters describing this behavioral model varied among individuals and across species in a correlated way that is consistent with a trade-off between exploratory and exploitative behavior.

  5. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating the multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both the linkage and linkage disequilibrium (LD) information of the genetic markers. Using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants, adjusting for covariates. Extensive simulation analysis shows that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) in most cases for three scenarios: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both the genetic linkage and the LD information of multiple genetic variants in a genome, and of the similarity among different individuals, while SKAT and SKAT-O model only the similarities and pairwise LD and do not sufficiently model linkage and higher-order LD information. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small-sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  6. Quantitative structure-activity relationship modeling on in vitro endocrine effects and metabolic stability involving 26 selected brominated flame retardants.

    PubMed

    Harju, Mikael; Hamers, Timo; Kamstra, Jorke H; Sonneveld, Edwin; Boon, Jan P; Tysklind, Mats; Andersson, Patrik L

    2007-04-01

    In this work, quantitative structure-activity relationships (QSARs) were developed to aid human and environmental risk assessment processes for brominated flame retardants (BFRs). Brominated flame retardants, such as the high-production-volume chemicals polybrominated diphenyl ethers (PBDEs), tetrabromobisphenol A, and hexabromocyclododecane, have been identified as potential endocrine disruptors. Quantitative structure-activity relationship models were built based on the in vitro potencies of 26 selected BFRs. The in vitro assays included interactions with, for example, the androgen, progesterone, estrogen, and dioxin (aryl hydrocarbon) receptors, plus competition with thyroxine for its plasma carrier protein (transthyretin), inhibition of estradiol sulfation via sulfotransferase, and, finally, the rate of metabolization. For the QSAR modeling, a number of physicochemical parameters were calculated, describing the electronic, lipophilic, and structural characteristics of the molecules. These include frontier molecular orbitals, molecular charges, polarities, the log octanol/water partitioning coefficient, and two- and three-dimensional molecular properties. Experimental properties were also included and measured for the PBDEs, such as their individual ultraviolet spectra (200-320 nm) and retention times on three different high-performance liquid chromatography columns and one nonpolar gas chromatography column. Quantitative structure-activity relationship models based on androgen antagonism and metabolic degradation rates generally gave similar results, suggesting that lower-brominated PBDEs with bromine substitutions in ortho positions and bromine-free meta and para positions had the highest potencies and metabolic degradation rates. Predictions made for the constituents of the technical flame retardant Bromkal 70-5DE found BDE 17 to be a potent androgen antagonist and BDE 66, which is a relevant PBDE in environmental samples, to be only a weak antagonist.

  7. How predictive quantitative modelling of tissue organisation can inform liver disease pathogenesis.

    PubMed

    Drasdo, Dirk; Hoehme, Stefan; Hengstler, Jan G

    2014-10-01

    Of the more than 100 liver diseases described, many of those with high incidence rates manifest themselves through histopathological changes, such as hepatitis, alcoholic liver disease, fatty liver disease, fibrosis and, in later stages, cirrhosis, hepatocellular carcinoma, primary biliary cirrhosis and other disorders. Studies of disease pathogeneses are largely based on integrating -omics data pooled from cells at different locations with spatial information from stained liver structures in animal models. Even though this has led to significant insights, the complexity of the interactions, as well as the involvement of processes at many different time and length scales, constrains the possibility of condensing disease processes into illustrations, schemes and tables. The combination of modern imaging modalities with image processing and analysis, and with mathematical models, opens up a promising new approach towards a quantitative understanding of pathologies and disease processes. This strategy is discussed for two examples: ammonia metabolism after drug-induced acute liver damage, and the recovery of liver mass and architecture during the subsequent regeneration process. This interdisciplinary approach permits the integration of biological mechanisms and models of processes contributing to disease progression at various scales into mathematical models. These can be used to perform in silico simulations, to help unravel the relation between architecture and function (as illustrated below for liver regeneration), and to bridge from the in vitro situation and animal models to humans. In the near future, novel mechanisms will usually not be elucidated directly by modelling. However, models will falsify hypotheses and guide towards the most informative experimental design. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  8. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR)-based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as the change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array on a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not included in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for evaluating and selecting sensing materials. It can also be used to predict the response of an existing sensing film to new target analytes.

  9. [A new method of processing quantitative PCR data].

    PubMed

    Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun

    2003-05-01

    Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After extensive kinetic studies, PE company found a linear relation between the initial template number and the cycle at which the accumulating fluorescent product becomes detectable; they therefore developed a quantitative PCR technique used in the PE7700 and PE5700. The error of this technique, however, is too great to satisfy the needs of biotechnology development and clinical research, and a better quantitative PCR technique is needed. The mathematical model presented here draws on results from the relevant sciences and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and it accurately reflects the accumulation kinetics of the PCR product. Accurate quantitative PCR analysis can be performed using this functional relation: the accumulated PCR product quantity can be obtained from the initial template number. When this model is used for quantitative PCR analysis, the resulting error depends only on the accuracy of the fluorescence intensity measurement, that is, on the instrument used. For example, when the fluorescence intensity is accurate to six digits and the template number is between 100 and 1,000,000, the quantitative accuracy exceeds 99%. Under the same conditions and on the same instrument, different analysis methods yield distinctly different errors; processing the data with the quantitative analysis system described here gives results up to 80 times more accurate than the CT method.
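
    The underlying accumulation law is the usual exponential model: after c cycles the product is approximately N0(1 + E)^c for amplification efficiency E, which is what makes the threshold cycle linear in log N0. A minimal sketch with illustrative numbers:

      # Minimal sketch of exponential PCR accumulation, N(c) = N0*(1 + E)^c,
      # and the linear relation between log(N0) and the threshold cycle.
      import numpy as np

      def threshold_cycle(n0, threshold=1e10, efficiency=0.95):
          # Solve n0 * (1 + E)^c = threshold for the cycle number c.
          return np.log(threshold / n0) / np.log(1.0 + efficiency)

      for n0 in (1e2, 1e4, 1e6):
          print(n0, round(threshold_cycle(n0), 2))  # c falls linearly in log n0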

  10. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework
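
    A minimal sketch of such a standardized interface follows, with control functions (initialize/update/finalize) and description functions a framework can query; the class, variable, and method names are simplified illustrations, not the exact CSDMS or ESMF API.

      # Minimal sketch of a controllable, self-describing process model.
      class SelfDescribingModel:
          # Control functions: the caller drives the model's lifecycle.
          def initialize(self, config_file):
              self.time, self.dt = 0.0, 60.0          # seconds
              self.state = {"water_depth": 0.0}       # toy state variable

          def update(self):
              self.state["water_depth"] += 0.001 * self.dt
              self.time += self.dt

          def finalize(self):
              self.state.clear()

          # Description functions: the caller queries model metadata.
          def get_output_var_names(self):
              return ("water_depth",)

          def get_var_units(self, name):
              return {"water_depth": "m"}[name]

          def get_time_step(self):
              return self.dt

      m = SelfDescribingModel()
      m.initialize("config.cfg")     # a coupling framework would run this loop
      m.update()
      print(m.get_var_units("water_depth"), m.state["water_depth"])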

  11. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    PubMed

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. Here we report, in a perspective article, a summary of the presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  12. Quantitative Systems Pharmacology Modeling of Acid Sphingomyelinase Deficiency and the Enzyme Replacement Therapy Olipudase Alfa Is an Innovative Tool for Linking Pathophysiology and Pharmacology.

    PubMed

    Kaddi, Chanchala D; Niesner, Bradley; Baek, Rena; Jasper, Paul; Pappas, John; Tolsma, John; Li, Jing; van Rijn, Zachary; Tao, Mengdi; Ortemann-Renon, Catherine; Easton, Rachael; Tan, Sharon; Puga, Ana Cristina; Schuchman, Edward H; Barrett, Jeffrey S; Azer, Karim

    2018-06-19

    Acid sphingomyelinase deficiency (ASMD) is a rare lysosomal storage disorder with heterogeneous clinical manifestations, including hepatosplenomegaly and infiltrative pulmonary disease, and is associated with significant morbidity and mortality. Olipudase alfa (recombinant human acid sphingomyelinase) is an enzyme replacement therapy under development for the non-neurological manifestations of ASMD. We present a quantitative systems pharmacology (QSP) model supporting the clinical development of olipudase alfa. The model is multiscale and mechanistic, linking the enzymatic deficiency driving the disease to molecular-level, cellular-level, and organ-level effects. Model development was informed by natural history, and preclinical and clinical studies. By considering patient-specific pharmacokinetic (PK) profiles and indicators of disease severity, the model describes pharmacodynamic (PD) and clinical end points for individual patients. The ASMD QSP model provides a platform for quantitatively assessing systemic pharmacological effects in adult and pediatric patients, and explaining variability within and across these patient populations, thereby supporting the extrapolation of treatment response from adults to pediatrics. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  13. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...

  14. Physiologically motivated multiplex Kuramoto model describes phase diagram of cortical activity

    NASA Astrophysics Data System (ADS)

    Sadilek, Maximilian; Thurner, Stefan

    2015-05-01

    We derive a two-layer multiplex Kuramoto model from Wilson-Cowan type physiological equations that describe neural activity on a network of interconnected cortical regions. This is mathematically possible due to the existence of a unique, stable limit cycle, weak coupling, and inhibitory synaptic time delays. We study the phase diagram of this model numerically as a function of the inter-regional connection strength that is related to cerebral blood flow, and a phase shift parameter that is associated with synaptic GABA concentrations. We find three macroscopic phases of cortical activity: background activity (unsynchronized oscillations), epileptiform activity (highly synchronized oscillations) and resting-state activity (synchronized clusters/chaotic behaviour). Previous network models could hitherto not explain the existence of all three phases. We further observe a shift of the average oscillation frequency towards lower values together with the appearance of coherent slow oscillations at the transition from resting-state to epileptiform activity. This observation is fully in line with experimental data and could explain the influence of GABAergic drugs both on gamma oscillations and epileptic states. Compared to previous models for gamma oscillations and resting-state activity, the multiplex Kuramoto model not only provides a unifying framework, but also has a direct connection to measurable physiological parameters.

  15. Physiologically motivated multiplex Kuramoto model describes phase diagram of cortical activity.

    PubMed

    Sadilek, Maximilian; Thurner, Stefan

    2015-05-21

    We derive a two-layer multiplex Kuramoto model from Wilson-Cowan type physiological equations that describe neural activity on a network of interconnected cortical regions. This is mathematically possible due to the existence of a unique, stable limit cycle, weak coupling, and inhibitory synaptic time delays. We study the phase diagram of this model numerically as a function of the inter-regional connection strength that is related to cerebral blood flow, and a phase shift parameter that is associated with synaptic GABA concentrations. We find three macroscopic phases of cortical activity: background activity (unsynchronized oscillations), epileptiform activity (highly synchronized oscillations) and resting-state activity (synchronized clusters/chaotic behaviour). Previous network models could hitherto not explain the existence of all three phases. We further observe a shift of the average oscillation frequency towards lower values together with the appearance of coherent slow oscillations at the transition from resting-state to epileptiform activity. This observation is fully in line with experimental data and could explain the influence of GABAergic drugs both on gamma oscillations and epileptic states. Compared to previous models for gamma oscillations and resting-state activity, the multiplex Kuramoto model not only provides a unifying framework, but also has a direct connection to measurable physiological parameters.

  16. Dissecting Embryonic Stem Cell Self-Renewal and Differentiation Commitment from Quantitative Models.

    PubMed

    Hu, Rong; Dai, Xianhua; Dai, Zhiming; Xiang, Qian; Cai, Yanning

    2016-10-01

    To quantitatively model embryonic stem cell (ESC) self-renewal and differentiation by computational approaches, we developed a unified mathematical model for the gene expression involved in cell fate choices. Our quantitative model comprises ESC master regulators and lineage-specific pivotal genes. It takes the factors of multiple pathways as input and computes expression as a function of intrinsic transcription factors, extrinsic cues, epigenetic modifications, and the antagonism between ESC master regulators and lineage-specific pivotal genes. In the model, differential equations for the expression of the genes involved in cell fate choices were established from the regulatory relationships, according to transcription and degradation rates. We applied this model to murine ESC self-renewal and differentiation commitment and found that it reproduced the expression patterns with good accuracy. Our model analysis revealed that the murine ESC state is an attractor in culture and that differentiation is predominantly caused by the antagonism between ESC master regulators and lineage-specific pivotal genes. Moreover, antagonism among lineages plays a critical role in lineage reprogramming. Our results also show that the ordered alteration of ESC master regulator expression over time has a central role in determining ESC differentiation fates. The computational framework is generally applicable to most cell-type maintenance and lineage reprogramming problems.
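
    The antagonism term in such models is essentially a mutual-repression toggle; a minimal sketch with illustrative rates shows how a transient differentiation cue flips the system from the ESC attractor to a lineage attractor, where it remains after the cue is removed.

      # Minimal sketch: mutual antagonism between an ESC master regulator (x)
      # and a lineage-specific gene (y); rates and the cue are illustrative.
      def step(x, y, cue=0.0, a=2.0, n=4, k=1.0, dt=0.01):
          dx = a / (1.0 + y**n) - k * x          # x repressed by y
          dy = a / (1.0 + x**n) - k * y + cue    # y repressed by x, plus cue
          return x + dt * dx, y + dt * dy

      x, y = 1.8, 0.2                            # near the ESC attractor
      for t in range(30000):
          cue = 2.0 if 5000 < t < 15000 else 0.0 # transient extrinsic signal
          x, y = step(x, y, cue)
      print(round(x, 2), round(y, 2))            # stays in the lineage state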

  17. Getting quantitative about consequences of cross-ecosystem resource subsidies on recipient consumers

    USGS Publications Warehouse

    Richardson, John S.; Wipfli, Mark S.

    2016-01-01

    Most studies of cross-ecosystem resource subsidies have demonstrated positive effects on recipient consumer populations, often with very large effect sizes. However, it is important to move beyond these initial addition–exclusion experiments to consider the quantitative consequences for populations across gradients in the rates and quality of resource inputs. In our introduction to this special issue, we describe at least four potential models of the functional relationship between subsidy input rates and consumer responses, most of them asymptotic. Here we aim to advance our quantitative understanding of how subsidy inputs influence recipient consumers and their communities. In the papers that follow, fish were either the recipient consumers or the subsidy itself, in the form of carcasses of anadromous species. Advancing general, predictive models will enable us to further consider what other factors are potentially co-limiting (e.g., nutrients, other population interactions, physical habitat) and to better integrate resource subsidies into consumer-resource and biophysical dynamics models.
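
    The four functional forms are not spelled out in the abstract, so the sketch below simply collects generic candidate consumer-response curves of the kind described ("most of them asymptotic"); all shapes and parameter values are assumptions for illustration.

      import numpy as np

      def linear(s, a=0.4):               return a*s
      def saturating(s, rmax=8, h=5):     return rmax*s/(h + s)            # asymptotic
      def sigmoid(s, rmax=8, h=5, n=3):   return rmax*s**n/(h**n + s**n)   # threshold, asymptotic
      def hump(s, a=2.0, c=0.08):         return a*s*np.exp(-c*s)          # declines at high input

      for s in (1.0, 5.0, 20.0, 80.0):    # subsidy input rate
          print(s, round(linear(s), 2), round(saturating(s), 2),
                round(sigmoid(s), 2), round(hump(s), 2))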

  18. A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes

    NASA Astrophysics Data System (ADS)

    Olsen, Seth

    2012-04-01

    We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.

  19. Wires in the soup: quantitative models of cell signaling

    PubMed Central

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, necessitating sophisticated computational modeling coupled with precise experimentation for their unraveling. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655

  20. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  1. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  2. Models to describe the thermal development rates of Cycloneda sanguinea L. (Coleoptera: Coccinellidae).

    PubMed

    Pachú, Jéssica Ks; Malaquias, José B; Godoy, Wesley Ac; de S Ramalho, Francisco; Almeida, Bruna R; Rossi, Fabrício

    2018-04-01

    Precise estimates of the lower (Tmin) and upper (Tmax) thermal thresholds, as well as of the temperature range that provides optimum performance (Topt), make it possible to obtain the desired number of individuals in conservation systems and in the rearing and release of natural enemies. In this study, the relationship between the development rates of Cycloneda sanguinea L. (Coleoptera: Coccinellidae) and temperature was described using non-linear models developed by Analytis, Brière, Lactin, Lamb, Logan and Sharpe & DeMichele. There were differences between the models in the estimates of the parameters Tmin, Tmax and Topt. All of the tested models were able to describe the non-linear response of the development rates of C. sanguinea to constant temperatures. Lactin and Lamb gave the highest z weight for eggs, while Analytis, Sharpe & DeMichele and Brière gave the highest values for larvae and pupae. The most realistic Topt estimated by the models varied from 29 to 31 °C for eggs, 27-28 °C for larvae and 28-29 °C for pupae. The Logan, Lactin and Analytis models estimated the Tmax for eggs, larvae and pupae to be approximately 34 °C, while the Tmin estimated by the Analytis model was 16 °C for larvae and pupae. The information generated by our research will contribute towards improving the rearing and release of C. sanguinea in biological control programs, by accurately controlling the rate of development under laboratory conditions or scheduling this species' release at the most favourable time. Copyright © 2018 Elsevier Ltd. All rights reserved.
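
    As a concrete example of one of the models named above, the Brière-1 development-rate function can be written down directly; the thresholds below are set near the values reported in the abstract, while the coefficient a is an assumed placeholder.

      import numpy as np

      def briere1(T, a, Tmin, Tmax):
          """Briere-1 development rate: a*T*(T - Tmin)*sqrt(Tmax - T) inside (Tmin, Tmax)."""
          T = np.asarray(T, dtype=float)
          r = a * T * (T - Tmin) * np.sqrt(np.clip(Tmax - T, 0.0, None))
          return np.where((T > Tmin) & (T < Tmax), r, 0.0)

      a, Tmin, Tmax = 1e-4, 16.0, 34.0          # thresholds near the reported estimates
      T = np.linspace(10.0, 36.0, 2601)
      rate = briere1(T, a, Tmin, Tmax)
      print(f"Topt ~ {T[np.argmax(rate)]:.1f} C")   # optimum emerges near 29 C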

  3. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques are regression methods used to build prediction models; however, the accuracy of the results is affected by many factors. In the present paper, the influence of sample surface roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughnesses, and its prediction ability was much better than that of the single-roughness model.

  4. A quantitative model for transforming reflectance spectra into the Munsell color space using cone sensitivity functions and opponent process weights.

    PubMed

    D'Andrade, Roy G; Romney, A Kimball

    2003-05-13

    This article presents a computational model of the process through which the human visual system transforms reflectance spectra into perceptions of color. Using physical reflectance spectra and standard human cone sensitivity functions, we describe the transformations necessary for predicting the location of colors in the Munsell color space. These transformations include quantitative estimates of the opponent process weights needed to transform cone activations into Munsell color space coordinates. Using these opponent process weights, the Munsell position of specific colors can be predicted from their physical spectra with a mean correlation of 0.989.
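
    The pipeline is linear algebra once the spectra are discretized: cone activations are inner products of the reflectance spectrum with the cone sensitivity functions, and the opponent channels are a linear transform of the cone vector. In this sketch the Gaussian cone sensitivities and the weight matrix are placeholders, not the fitted opponent weights of the paper.

      import numpy as np

      wl = np.arange(400, 701, 10)                                     # wavelengths (nm)
      def gauss(mu, sig): return np.exp(-0.5*((wl - mu)/sig)**2)

      S = np.vstack([gauss(565, 55), gauss(540, 45), gauss(445, 30)])  # L, M, S cones (placeholder)
      reflectance = 0.2 + 0.6*gauss(620, 40)                           # a reddish sample spectrum

      cones = S @ reflectance                   # discretized integral: spectrum x sensitivity
      W = np.array([[ 1.0, -1.0,  0.0 ],        # red-green opponent channel (assumed weights)
                    [ 0.5,  0.5, -1.0 ],        # yellow-blue opponent channel
                    [ 0.33, 0.33, 0.33]])       # achromatic (value-like) channel
      print("opponent coordinates:", np.round(W @ cones, 2))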

  5. A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.

    PubMed

    Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R

    2018-01-01

    There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.

  6. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    PubMed

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used, but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified luciferase expression in BLM and HCT8/E11 transfected cancer cells and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations is most accurate with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence. These inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  7. Applying Qualitative Hazard Analysis to Support Quantitative Safety Analysis for Proposed Reduced Wake Separation Conops

    NASA Technical Reports Server (NTRS)

    Shortle, John F.; Allocco, Michael

    2005-01-01

    This paper describes a scenario-driven hazard analysis process to identify, eliminate, and control safety-related risks. Within this process, we develop selective criteria to determine the applicability of applying engineering modeling to hypothesized hazard scenarios. This provides a basis for evaluating and prioritizing the scenarios as candidates for further quantitative analysis. We have applied this methodology to proposed concepts of operations for reduced wake separation for closely spaced parallel runways. For arrivals, the process identified 43 core hazard scenarios. Of these, we classified 12 as appropriate for further quantitative modeling, 24 that should be mitigated through controls, recommendations, and/or procedures (that is, scenarios not appropriate for quantitative modeling), and 7 that have the lowest priority for further analysis.

  8. Quantitative characterization of genetic parts and circuits for plant synthetic biology.

    PubMed

    Schaumberg, Katherine A; Antunes, Mauricio S; Kassaw, Tessema K; Xu, Wenlong; Zalewski, Christopher S; Medford, June I; Prasad, Ashok

    2016-01-01

    Plant synthetic biology promises immense technological benefits, including the potential development of a sustainable bio-based economy through the predictive design of synthetic gene circuits. Such circuits are built from quantitatively characterized genetic parts; however, this characterization is a significant obstacle in work with plants because of the time required for stable transformation. We describe a method for rapid quantitative characterization of genetic plant parts using transient expression in protoplasts and dual luciferase outputs. We observed experimental variability in transient-expression assays and developed a mathematical model to describe, as well as statistical normalization methods to account for, this variability, which allowed us to extract quantitative parameters. We characterized >120 synthetic parts in Arabidopsis and validated our method by comparing transient expression with expression in stably transformed plants. We also tested >100 synthetic parts in sorghum (Sorghum bicolor) protoplasts, and the results showed that our method works in diverse plant groups. Our approach enables the construction of tunable gene circuits in complex eukaryotic organisms.

  9. Strengthening Student Engagement with Quantitative Subjects in a Business Faculty

    ERIC Educational Resources Information Center

    Warwick, Jon; Howard, Anna

    2014-01-01

    This paper reflects on the results of research undertaken at a large UK university relating to the teaching of quantitative subjects within a Business Faculty. It builds on a simple model of student engagement and, through the description of three case studies, describes research undertaken and developments implemented to strengthen aspects of the…

  10. An orientation sensitive approach in biomolecule interaction quantitative structure-activity relationship modeling and its application in ion-exchange chromatography.

    PubMed

    Kittelmann, Jörg; Lang, Katharina M H; Ottens, Marcel; Hubbuch, Jürgen

    2017-01-27

    Quantitative structure-activity relationship (QSAR) modeling for the prediction of biomolecule parameters has become an established technique in chromatographic purification process design. Unfortunately, available descriptor sets fail to describe the orientation of biomolecules and the effect of the ionic strength of the mobile phase on the interaction with the stationary phase. The literature describes several special descriptors used for chromatographic retention modeling, but none of them describes the screening of electrostatic potential by the mobile phase in use. In this work we introduce two new approaches to descriptor calculation, namely surface patches and plane projection, which capture oriented binding to charged surfaces and steric hindrance of the interaction with chromatographic ligands, with regard to electrostatic potential screening by mobile phase ions. We present the use of the developed descriptor sets for predictive modeling of Langmuir isotherms for proteins at different pH values between pH 5 and 10 and varying ionic strength in the range of 10-100 mM. The resulting model shows a high correlation between calculated descriptors and experimental results, with a coefficient of determination of 0.82 and a predictive coefficient of determination of 0.92 for unknown molecular structures and conditions. The agreement of the calculated molecular interaction orientations with both experimental results and molecular dynamics simulations from the literature is shown. The developed descriptors provide the means for improved QSAR models of chromatographic processes, as they reflect the complex interactions of biomolecules with chromatographic phases. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Proposal for a quantitative index of flood disasters.

    PubMed

    Feng, Lihua; Luo, Gaoyuan

    2010-07-01

    Drawing on calculations of wind scale and earthquake magnitude, this paper develops a new quantitative method for measuring flood magnitude and disaster intensity. Flood magnitude is the quantitative index that describes the scale of a flood; the flood's disaster intensity is the quantitative index describing the losses caused. Both indices have numerous theoretical and practical advantages with definable concepts and simple applications, which lend them key practical significance.
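
    The abstract only says the indices are built by analogy with wind scale and earthquake magnitude, so a base-10 logarithmic index is the natural reading; the reference values in this sketch are assumptions, not the paper's constants.

      import math

      def flood_magnitude(peak_discharge_m3s, reference_m3s=1.0):
          # logarithmic scale analogous to earthquake magnitude (constants assumed)
          return math.log10(peak_discharge_m3s / reference_m3s)

      def disaster_intensity(direct_loss_usd, reference_usd=1.0e6):
          return math.log10(direct_loss_usd / reference_usd)

      print(f"magnitude {flood_magnitude(25000.0):.1f}, intensity {disaster_intensity(3e9):.1f}")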

  12. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    PubMed

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  13. An extended model based on the modified Nernst-Planck equation for describing transdermal iontophoresis of weak electrolytes.

    PubMed

    Imanidis, Georgios; Luetolf, Peter

    2006-07-01

    An extended model for the iontophoretic enhancement of transdermal drug permeation under constant voltage is described, based on the previously modified Nernst-Planck equation, which includes the effect of convective solvent flow. This model yields an analytical expression for the enhancement factor as a function of applied voltage, convective flow velocity due to electroosmosis, the ratio of lipid to aqueous pathway passive permeability, and the weighted average net ionic valence of the permeant in the aqueous epidermis domain. The shift of pH in the epidermis relative to the bulk, caused by the electrical double layer at the lipid-aqueous domain interface, was evaluated using the Poisson-Boltzmann equation. This was solved numerically for representative surface charge densities and yielded pH differences between the bulk and the epidermal aqueous domain of 0.05 to 0.4 pH units. The developed model was used to analyze the experimental enhancement of an amphoteric weak electrolyte measured in vitro using human cadaver epidermis and a voltage of 250 mV at different pH values. Parameter values characterizing the involved factors were determined that reproduced the experimental enhancement factors and passive permeability coefficients at all pH values. The model provided very good agreement between experimental and calculated enhancement and passive permeability. The deduced parameters showed (i) that the pH shift in the aqueous permeation pathway had a notable effect on the ionic valence and the partitioning of the drug in this domain for a high surface charge density, depending on the pKa and pI of the drug in relation to the bulk pH; (ii) that the magnitude and direction of convective transport due to electroosmosis reflected the density and sign, respectively, of the surface charge of the tissue, and that its effect on enhancement was substantial for bulk pH values differing from the pI of epidermal tissue; and (iii) that the aqueous pathway predominantly determined passive permeability.
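
    The core of any such model is the Nernst-Planck enhancement factor for a constant applied voltage. The sketch below uses the classic closed form with a Peclet-type term added to the driving force to stand in for electroosmotic convection; the additive combination and all numbers are simplifying assumptions, and the paper's full expression also involves the lipid/aqueous permeability ratio and the epidermal pH shift.

      import math

      F, R, T = 96485.0, 8.314, 305.0     # Faraday constant, gas constant, skin temperature (K)

      def enhancement(z, dV, Pe=0.0):
          """Flux enhancement over passive diffusion at constant voltage dV (V).
          z: weighted average net ionic valence; Pe: Peclet number for electroosmotic flow."""
          u = z*F*dV/(R*T) + Pe           # electrical plus convective driving force (assumed additive)
          return 1.0 if abs(u) < 1e-12 else u/(1.0 - math.exp(-u))

      print(f"{enhancement(1.0, 0.25):.1f}x")          # monovalent cation at 250 mV: ~9.5x
      print(f"{enhancement(0.3, 0.25, Pe=0.5):.1f}x")  # amphoteric drug, average valence 0.3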

  14. Qualitative versus quantitative methods in psychiatric research.

    PubMed

    Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S

    2012-01-01

    Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event, and are hence considered to be of special importance in psychiatry. Besides hypothesis generation in the earlier phases of research, qualitative methods can be employed in questionnaire design, the establishment of diagnostic criteria, feasibility studies, and studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be confirmed statistically with quantitative methods. The tendency to combine qualitative and quantitative methods as complementary has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.

  15. Program to Produce Tabulated Data Set Describing NSWC Burn Model for Hydrodynamic Computations

    DTIC Science & Technology

    1990-09-11

    NAVSWC TR 90-364 (AD-A238 710): Program to Produce Tabulated Data Set Describing NSWC Burn Model for Hydrodynamic Computations, by Lewis C. Hudson III. The report acknowledges the helpful insights of Dr. Raafat Guirguis of the Naval Surface Warfare Center on how the NSWC Burn Model works, and of Drs. Schittke and Feisler.

  16. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    PubMed

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.
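
    The ODE half of such a theory reduces to tracking the fraction of the filament in the active phase; a minimal sketch with assumed rate constants and a square calcium pulse (not the paper's calibrated model) looks like this:

      import numpy as np

      k_on, k_off = 50.0, 10.0            # activation/deactivation rates (1/s), illustrative
      dt, steps = 1e-4, 30000
      A, trace = 0.0, []                  # A: fraction of the filament in the active phase

      for i in range(steps):
          t = i*dt
          ca = 1.0 if t < 1.0 else 0.05   # normalized calcium: pulse, then near-resting level
          A += dt*(k_on*ca*(1.0 - A) - k_off*A)   # linear ODE for the active-phase fraction
          trace.append(A)

      print(f"peak activation {max(trace):.2f}, relaxed level {trace[-1]:.2f}")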

  17. A Multivariate Statistical Model Describing the Compound Nature of Soil Moisture Drought

    NASA Astrophysics Data System (ADS)

    Manning, Colin; Widmann, Martin; Bevacqua, Emanuele; Maraun, Douglas; Van Loon, Anne; Vrac, Mathieu

    2017-04-01

    Soil moisture in Europe acts to partition incoming energy into sensible and latent heat fluxes, thereby exerting a large influence on temperature variability. Soil moisture is predominantly controlled by precipitation and evapotranspiration. When these meteorological variables are accumulated over different timescales, their joint multivariate distribution and dependence structure can be used to provide information on soil moisture. We therefore consider soil moisture drought as a compound event of meteorological drought (deficits of precipitation) and heat waves, or more specifically, periods of high potential evapotranspiration (PET). We present here a statistical model of soil moisture, based on pair copula constructions (PCCs), that can describe the dependence among soil moisture and its contributing meteorological variables. The model is designed so that it can account for concurrences of meteorological drought and heat waves and describe the dependence between these conditions at a local level. The model comprises four variables: daily soil moisture (h); short-term and long-term accumulated precipitation variables (Y1 and Y2), which account for the propagation of meteorological drought to soil moisture drought; and accumulated PET (Y3), calculated using the Penman-Monteith equation, which represents the effect of a heat wave on soil conditions. Copulas are multivariate distribution functions that allow one to model the dependence structure of given variables separately from their marginal behaviour. PCCs then allow, in principle, the formulation of a multivariate distribution of any dimension, where the multivariate distribution is decomposed into a product of marginal probability density functions and two-dimensional copulas, some of which are conditional. We apply PCCs here in such a way as to provide estimates of h and their uncertainty through conditioning on the Y variables, in the form h = h | y1, y2, y3 (1). Applying the model to various
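
    The paper's four-dimensional pair copula construction is beyond a short sketch, but a two-variable Gaussian-copula toy on synthetic data shows the key idea of separating marginals from dependence and then conditioning h on a meteorological driver; the data-generating process, the Gaussian copula choice, and all numbers are assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      y = rng.gamma(2.0, 20.0, 5000)                        # accumulated-PET-like driver (toy)
      h = 100.0*np.exp(-0.01*y) + rng.normal(0, 3, 5000)    # soil-moisture-like response (toy)

      def normal_scores(x):                                 # empirical marginal -> N(0, 1)
          return stats.norm.ppf(stats.rankdata(x)/(len(x) + 1.0))

      zy, zh = normal_scores(y), normal_scores(h)
      rho = np.corrcoef(zy, zh)[0, 1]                       # Gaussian-copula dependence parameter

      y0 = 100.0                                            # condition on a dry, hot accumulation
      z0 = stats.norm.ppf((y < y0).mean())
      h_hat = np.quantile(h, stats.norm.cdf(rho*z0))        # map back through h's empirical quantiles
      print(f"rho = {rho:.2f}, estimated h given y = {y0}: {h_hat:.1f}")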

  18. Quantitative computational models of molecular self-assembly in systems biology

    PubMed Central

    Thomas, Marcus; Schwartz, Russell

    2017-01-01

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally. PMID:28535149

  19. Quantitative computational models of molecular self-assembly in systems biology.

    PubMed

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  20. Corequisite Model: An Effective Strategy for Remediation in Freshmen Level Quantitative Reasoning Course

    ERIC Educational Resources Information Center

    Kashyap, Upasana; Mathew, Santhosh

    2017-01-01

    The purpose of this study was to compare students' performances in a freshmen level quantitative reasoning course (QR) under three different instructional models. A cohort of 155 freshmen students was placed in one of the three models: needing a prerequisite course, corequisite (students enroll simultaneously in QR course and a course that…

  1. Mini-Column Ion-Exchange Separation and Atomic Absorption Quantitation of Nickel, Cobalt, and Iron: An Undergraduate Quantitative Analysis Experiment.

    ERIC Educational Resources Information Center

    Anderson, James L.; And Others

    1980-01-01

    Presents an undergraduate quantitative analysis experiment, describing an atomic absorption quantitation scheme that is fast, sensitive and comparatively simple relative to other titration experiments. (CS)

  2. A novel quantitative model of cell cycle progression based on cyclin-dependent kinases activity and population balances.

    PubMed

    Pisu, Massimo; Concas, Alessandro; Cao, Giacomo

    2015-04-01

    The cell cycle regulates proliferative cell capacity under normal or pathologic conditions, and in general it governs all in vivo/in vitro cell growth and proliferation processes. Mathematical simulation by means of reliable and predictive models represents an important tool for interpreting experimental results, facilitating the definition of optimal operating conditions for in vitro cultivation, or predicting the effect of a specific drug in normal/pathologic mammalian cells. Along these lines, a novel model of cell cycle progression is proposed in this work. Specifically, it is based on a population balance (PB) approach that allows one to quantitatively describe cell cycle progression through the different phases experienced by each cell of the entire population during its own life. The transition between two consecutive cell cycle phases is simulated by taking advantage of the biochemical kinetic model developed by Gérard and Goldbeter (2009), which involves cyclin-dependent kinases (CDKs) whose regulation is achieved through a variety of mechanisms that include association with cyclins and protein inhibitors, phosphorylation-dephosphorylation, and cyclin synthesis or degradation. This biochemical model properly describes the entire cell cycle of mammalian cells while maintaining a sufficient level of detail to identify checkpoints for phase transitions and to estimate the phase durations required by the PB approach. Specific examples are discussed to illustrate the ability of the proposed model to simulate the effect of drugs in in vitro trials of interest in oncology, regenerative medicine and tissue engineering. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Multiple methods for multiple futures: Integrating qualitative scenario planning and quantitative simulation modeling for natural resource decision making

    USGS Publications Warehouse

    Symstad, Amy J.; Fisichelli, Nicholas A.; Miller, Brian W.; Rowland, Erika; Schuurman, Gregor W.

    2017-01-01

    Scenario planning helps managers incorporate climate change into their natural resource decision making through a structured “what-if” process of identifying key uncertainties and potential impacts and responses. Although qualitative scenarios, in which ecosystem responses to climate change are derived via expert opinion, often suffice for managers to begin addressing climate change in their planning, this approach may face limits in resolving the responses of complex systems to altered climate conditions. In addition, this approach may fall short of the scientific credibility managers often require to take actions that differ from current practice. Quantitative simulation modeling of ecosystem response to climate conditions and management actions can provide this credibility, but its utility is limited unless the modeling addresses the most impactful and management-relevant uncertainties and incorporates realistic management actions. We use a case study to compare and contrast management implications derived from qualitative scenario narratives and from scenarios supported by quantitative simulations. We then describe an analytical framework that refines the case study’s integrated approach in order to improve applicability of results to management decisions. The case study illustrates the value of an integrated approach for identifying counterintuitive system dynamics, refining understanding of complex relationships, clarifying the magnitude and timing of changes, identifying and checking the validity of assumptions about resource responses to climate, and refining management directions. Our proposed analytical framework retains qualitative scenario planning as a core element because its participatory approach builds understanding for both managers and scientists, lays the groundwork to focus quantitative simulations on key system dynamics, and clarifies the challenges that subsequent decision making must address.

  4. Bayesian parameter estimation in spectral quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja

    2016-03-01

    Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field from the measured ultrasound data via acoustic inverse problem approaches. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described by a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (the power law factor and the exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with this approach.
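
    The forward optical-parameter models named in the abstract are compact enough to write out; in this sketch the chromophore spectra are crude placeholders (the real ones are tabulated data) and all parameter values are assumed.

      import numpy as np

      wl = np.linspace(650.0, 950.0, 7)              # optical wavelengths (nm)
      eps = np.vstack([1e-3*(wl/650.0)**1.5,         # placeholder "HbO2"-like absorption spectrum
                       2e-3*(650.0/wl)**1.5])        # placeholder "Hb"-like absorption spectrum
      c = np.array([40.0, 15.0])                     # chromophore concentrations (a.u.)

      mu_a = c @ eps                                 # absorption: concentration-weighted sum of spectra
      a_s, b_s = 20.0, 1.2                           # Mie power-law scattering parameters
      mu_s = a_s*(wl/650.0)**(-b_s)                  # reduced scattering: exponential power law
      gamma = 0.2                                    # Gruneisen parameter
      p0 = gamma*mu_a                                # initial pressure (flat fluence assumed)
      print(np.round(mu_a, 3), np.round(mu_s, 1), np.round(p0, 4))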

  5. Quantitative Glycomics Strategies

    PubMed Central

    Mechref, Yehia; Hu, Yunli; Desantos-Garcia, Janie L.; Hussein, Ahmed; Tang, Haixu

    2013-01-01

    The correlations between protein glycosylation and many biological processes and diseases are increasing the demand for quantitative glycomics strategies enabling sensitive monitoring of changes in the abundance and structure of glycans. This is currently attained through multiple strategies employing several analytical techniques such as capillary electrophoresis, liquid chromatography, and mass spectrometry. The detection and quantification of glycans often involve labeling with ionic and/or hydrophobic reagents. This step is needed in order to enhance detection in spectroscopic and mass spectrometric measurements. Recently, labeling with stable isotopic reagents has also been presented as a very viable strategy enabling relative quantitation. The different strategies available for reliable and sensitive quantitative glycomics are herein described and discussed. PMID:23325767

  6. Establishment of quantitative retention-activity model by optimized microemulsion liquid chromatography.

    PubMed

    Xu, Liyuan; Gao, Haoshi; Li, Liangxing; Li, Yinnong; Wang, Liuyun; Gao, Chongkai; Li, Ning

    2016-12-23

    The effective permeability coefficient is of theoretical and practical importance in evaluating the bioavailability of drug candidates. However, most methods currently used to measure this coefficient are expensive and time-consuming. In this paper, we address these problems by proposing a new measurement method based on microemulsion liquid chromatography. First, the parallel artificial membrane permeability assay (PAMPA) model was used to determine the effective permeability of drugs so that quantitative retention-activity relationships could be established, which were then used to optimize the microemulsion liquid chromatography. The most effective microemulsion system used a mobile phase of 6.0% (w/w) Brij35, 6.6% (w/w) butanol, 0.8% (w/w) octanol, and 86.6% (w/w) phosphate buffer (pH 7.4). Next, support vector machines and back-propagation neural networks were employed to develop a quantitative retention-activity relationship model associated with the optimal microemulsion system and to improve the prediction ability. Finally, an adequate correlation between experimental and predicted values was computed to verify the performance of the optimal model. The results indicate that microemulsion liquid chromatography can serve as a possible alternative to the PAMPA method for the determination of high-throughput permeability and the simulation of biological processes. Copyright © 2016. Published by Elsevier B.V.
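
    Since the abstract names support vector machines for the retention-activity model, a minimal scikit-learn sketch on synthetic retention data conveys the shape of the workflow; the functional relationship, noise level and hyperparameters here are invented for illustration.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      logk = rng.uniform(0.0, 10.0, (60, 1))     # microemulsion LC retention (toy descriptor)
      logPe = 0.8*logk[:, 0] - 0.04*logk[:, 0]**2 + rng.normal(0, 0.2, 60)   # toy permeability

      model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(logk, logPe)
      print(f"predicted log Pe at retention 5.0: {model.predict([[5.0]])[0]:.2f}")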

  7. Quantitative property-structural relation modeling on polymeric dielectric materials

    NASA Astrophysics Data System (ADS)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix

  8. Quantitative petri net model of gene regulated metabolic networks in the cell.

    PubMed

    Chen, Ming; Hofestädt, Ralf

    2011-01-01

    A method to exploit hybrid Petri nets (HPNs) for quantitatively modeling and simulating gene-regulated metabolic networks is demonstrated. A global kinetic modeling strategy and a Petri net modeling algorithm are applied to simulate bioprocess function and to perform model analysis. With the model, the interrelations between pathway analysis and metabolic control mechanisms are outlined. The dynamics of metabolites are simulated and visualized diagrammatically with an HPN tool, Visual Object Net++. An explanation of the observed behavior of the urea cycle is proposed to indicate possibilities for metabolic engineering and medical care. Finally, the perspective of Petri nets for the modeling and simulation of metabolic networks is discussed.

  9. Enhancing the Quantitative Representation of Socioeconomic Conditions in the Shared Socio-economic Pathways (SSPs) using the International Futures Model

    NASA Astrophysics Data System (ADS)

    Rothman, D. S.; Siraj, A.; Hughes, B.

    2013-12-01

    The international research community is currently in the process of developing new scenarios for climate change research. One component of these scenarios is the Shared Socio-economic Pathways (SSPs), which describe a set of possible future socioeconomic conditions. These are presented in narrative storylines with associated quantitative drivers. The core quantitative drivers include total population, average GDP per capita, educational attainment, and urbanization at the global, regional, and national levels. At the same time there have been calls, particularly from the IAV community, for the SSPs to include additional quantitative information on other key social factors, such as income inequality, governance, health, and access to key infrastructures, which are discussed in the narratives. The International Futures system (IFs), based at the Pardee Center at the University of Denver, is able to provide forecasts of many of these indicators. IFs cannot use the SSP drivers as exogenous inputs, but we are able to create development pathways that closely reproduce the core quantitative drivers defined by the different SSPs, as well as incorporating assumptions on other key driving factors described in the qualitative narratives. In this paper, we present forecasts for additional quantitative indicators based upon the implementation of the SSP development pathways in IFs. These results will be of value to many researchers.

  10. A mathematical model to describe the nonlinear elastic properties of the gastrocnemius tendon of chickens.

    PubMed

    Foutz, T L

    1991-03-01

    A phenomenological model was developed to describe the nonlinear elastic behavior of the avian gastrocnemius tendon. Quasistatic uniaxial tensile tests were used to apply a deformation and resulting load on the tendon at a deformation rate of 5 mm/min. Plots of deformation versus load indicated a nonlinear loading response. By calculating engineering stress and engineering strain, the experimental data were normalized for tendon shape. The elastic response was determined from stress-strain curves and was found to vary with engineering strain. The response to the applied engineering strain could best be described by a mathematical model that combined a linear function and a nonlinear function. Three parameters in the model were developed to represent the nonlinear elastic behavior of the tendon, thereby allowing analysis of elasticity without prior knowledge of engineering strain. This procedure reduced the amount of data needed for the statistical analysis of nonlinear elasticity.
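
    The abstract specifies a three-parameter model combining a linear and a nonlinear function but not its exact form; a common choice for tendon (a linear term plus an exponential toe-region term) fitted with scipy illustrates the procedure on synthetic data.

      import numpy as np
      from scipy.optimize import curve_fit

      def stress(strain, a, b, c):
          # linear term plus exponential term: three parameters in total (form assumed)
          return a*strain + b*(np.exp(c*strain) - 1.0)

      strain = np.linspace(0.0, 0.08, 30)
      rng = np.random.default_rng(2)
      data = stress(strain, 50.0, 2.0, 40.0) + rng.normal(0, 0.3, strain.size)   # synthetic test

      (a, b, c), _ = curve_fit(stress, strain, data, p0=(10.0, 1.0, 10.0), maxfev=20000)
      print(f"a = {a:.1f}, b = {b:.2f}, c = {c:.1f}   (engineering stress units assumed)")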

  11. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  12. Quantitative Testing of Bedrock Incision Models, Clearwater River, WA

    NASA Astrophysics Data System (ADS)

    Tomkin, J. H.; Brandon, M.; Pazzaglia, F.; Barbour, J.; Willet, S.

    2001-12-01

    The topographic evolution of many active orogens is dominated by the process of bedrock channel incision. Several incision models based on the detachment-limited shear-stress model (or stream power model), which employs an area (A) and slope (S) power law (E = K S^n A^m), have been proposed to explain this process, and they require quantitative assessment. We evaluate the proposed incision models by comparing their predictions with observations obtained from a river in a tectonically active mountain range: the Clearwater River in northwestern Washington State. Previous work on river terraces along the Clearwater has provided long-term incision rates for the river, and in conjunction with previous fission track studies it has also been determined that there is a long-term balance between river incision and rock uplift. These steady-state incision rate data allow us, through the use of inversion methods and statistical tests, to determine the applicability of the different incision models to the Clearwater. None of the models successfully explains the observations. This conclusion particularly applies to the commonly used detachment-limited shear-stress model (or stream power model), which has a physically implausible best-fit solution and systematic residuals for all the predicted combinations of m and n.
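
    Testing the stream power law E = K S^n A^m against terrace-derived incision rates reduces, in its simplest form, to a log-linear regression; the sketch below generates synthetic terrace data under assumed K, n and m and recovers them by least squares (all values invented).

      import numpy as np

      rng = np.random.default_rng(3)
      A = 10**rng.uniform(6.0, 9.0, 40)              # drainage area (m^2), synthetic
      S = 10**rng.uniform(-3.0, -1.3, 40)            # channel slope, synthetic
      E = 1e-6 * S**1.0 * A**0.4 * 10**rng.normal(0, 0.05, 40)   # assumed "true" K, n, m + scatter

      # linearize: log10 E = log10 K + n log10 S + m log10 A, then least squares
      X = np.column_stack([np.ones(A.size), np.log10(S), np.log10(A)])
      coef, *_ = np.linalg.lstsq(X, np.log10(E), rcond=None)
      print("log10 K, n, m =", np.round(coef, 3))    # recovers ~(-6, 1.0, 0.4)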

  13. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cimpoesu, Dorin, E-mail: cdorin@uaic.ro; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

    We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model against full micromagnetic simulations in order to evaluate the performance and limits of our approach.
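
    For reference, the classic SW angular dependence of the switching field (the baseline that the generalized model extends with extra anisotropy terms) is a one-liner in reduced units:

      import numpy as np

      def h_sw(psi):
          """Classic Stoner-Wohlfarth switching field vs. field angle psi,
          in units of the anisotropy field H_K (the astroid critical curve)."""
          c, s = np.abs(np.cos(psi)), np.abs(np.sin(psi))
          return 1.0/(c**(2.0/3.0) + s**(2.0/3.0))**1.5

      angles = np.radians([0, 15, 30, 45, 60, 75, 90])
      print(np.round(h_sw(angles), 3))    # 1.0 along easy/hard axes, minimum 0.5 at 45 degrees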

  14. Modeling approaches to describe H2O and CO2 exchange in mire ecosystems

    NASA Astrophysics Data System (ADS)

    Olchev, A.; Novenko, E.; Volkova, E.

    2012-04-01

    Modern climatic conditions are strongly influenced both by the internal variability of the climatic system and by various external natural and anthropogenic factors (IPCC 2007). The significant increase in the concentration of greenhouse gases in the atmosphere, and especially the growth of atmospheric CO2 due to human activity, are considered the main factors responsible for global warming and climate change. A significant part of anthropogenic CO2 is absorbed from the atmosphere by the land biota, especially by the vegetation cover. However, it is still not completely clear what role different land ecosystems, especially forests and mires, play in the global cycles of H2O and CO2, and how sensitive these ecosystems are to climate change. Within the framework of this study, the spatial and temporal variability of H2O and CO2 fluxes in different types of mire ecosystems of the forest-steppe zone in the European part of Russia was described using modeling approaches and the results of field measurements. For this modeling and experimental study, the mire ecosystems of the Tula region were selected. The Tula region is located mostly in the forest-steppe zone and is a unique area for such studies because almost all existing types of mire ecosystems of Northern Eurasia, distinguished by geomorphological position and by water and mineral supply, can be found there. Most mires in the Tula region are relatively small and surrounded by very heterogeneous forests, which makes it impossible to apply classical measuring and modeling approaches, e.g. the eddy covariance technique or one-dimensional H2O and CO2 exchange models, for flux estimation at such sites. In our study, to describe the radiation, sensible heat, H2O and CO2 exchange between such heterogeneous mire ecosystems and the atmosphere, a three-dimensional model, Forbog-3D, and a one-dimensional model, Mixfor-SVAT, were applied. The main concept used in the Forbog-3D and Mixfor-SVAT models is an aggregated description of physical and

  15. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via the reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is also described. The second code is a modified version of the GENMIX boundary layer code, which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  16. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  17. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    PubMed Central

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  18. Symmetry breaking in (gravitating) scalar field models describing interacting boson stars and Q-balls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brihaye, Yves; Caebergs, Thierry; Hartmann, Betti

    2009-09-15

    We investigate in detail the properties of interacting Q-balls and boson stars that sit on top of each other. The model that describes these solutions is essentially a (gravitating) two-scalar-field model in which both scalar fields are complex. We construct interacting Q-balls or boson stars with arbitrarily small charges but finite mass. We observe that in the interacting case, where the interaction can be due either to the potential or to gravity, two types of solutions exist for equal frequencies: one for which the two scalar fields are equal, and one for which the two scalar fields differ. This constitutes a symmetry breaking in the model. While for Q-balls asymmetric solutions always have corresponding symmetric solutions, and are thus likely unstable to decay into symmetric solutions with lower energy, there exists a parameter regime for interacting boson stars where only asymmetric solutions exist. We present the domain of existence for two interacting nonrotating solutions as well as for solutions describing the interaction between rotating and nonrotating Q-balls and boson stars, respectively.

  19. A generalised individual-based algorithm for modelling the evolution of quantitative herbicide resistance in arable weed populations.

    PubMed

    Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul

    2017-02-01

    Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters strongly affected the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed. © 2016 Society of Chemical Industry.
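
    The mechanics of such a quantitative-resistance simulation can be sketched in a few lines (a stripped-down stand-in, not the authors' framework): resistance is a heritable quantitative trait, survival under the herbicide rises with the trait value, and the breeder's equation shifts the trait mean each season.

        import numpy as np

        rng = np.random.default_rng(1)
        h2, n_seeds, dose = 0.5, 10_000, 2.0   # assumed heritability, cohort size, selection strength

        def survival_prob(z, dose):
            # logistic dose response: higher resistance z -> higher survival
            return 1.0 / (1.0 + np.exp(-(z - dose)))

        mean_z, sd_z = 0.0, 1.0
        for year in range(10):
            z = rng.normal(mean_z, sd_z, n_seeds)
            alive = rng.random(n_seeds) < survival_prob(z, dose)
            # breeder's equation: response = h2 * selection differential
            mean_z += h2 * (z[alive].mean() - z.mean())
            print(f"year {year}: survival {alive.mean():.2%}, mean resistance {mean_z:.2f}")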

  20. The use of mode of action information in risk assessment: quantitative key events/dose-response framework for modeling the dose-response for key events.

    PubMed

    Simon, Ted W; Simons, S Stoney; Preston, R Julian; Boobis, Alan R; Cohen, Samuel M; Doerrer, Nancy G; Fenner-Crisp, Penelope A; McMullin, Tami S; McQueen, Charlene A; Rowlands, J Craig

    2014-08-01

    The HESI RISK21 project formed the Dose-Response/Mode-of-Action Subteam to develop strategies for using all available data (in vitro, in vivo, and in silico) to advance the next generation of chemical risk assessments. A goal of the Subteam is to enhance the existing Mode of Action/Human Relevance Framework and Key Events/Dose Response Framework (KEDRF) to make the best use of quantitative dose-response and timing information for Key Events (KEs). The resulting Quantitative Key Events/Dose-Response Framework (Q-KEDRF) provides a structured quantitative approach for systematic examination of the dose-response and timing of KEs resulting from a dose of a bioactive agent that causes a potential adverse outcome. Two concepts are described as aids to increasing the understanding of mode of action: Associative Events and Modulating Factors. These concepts are illustrated in two case studies: 1) cholinesterase inhibition by the pesticide chlorpyrifos, which illustrates the necessity of considering quantitative dose-response information when assessing the effect of a Modulating Factor, that is, enzyme polymorphisms in humans; and 2) estrogen-induced uterotrophic responses in rodents, which demonstrate how quantitative dose-response modeling for KEs, the understanding of temporal relationships between KEs, and a counterfactual examination of hypothesized KEs can determine whether they are Associative Events or true KEs.

  1. A quantitative test of population genetics using spatiogenetic patterns in bacterial colonies.

    PubMed

    Korolev, Kirill S; Xavier, João B; Nelson, David R; Foster, Kevin R

    2011-10-01

    It is widely accepted that population-genetics theory is the cornerstone of evolutionary analyses. Empirical tests of the theory, however, are challenging because of the complex relationships between space, dispersal, and evolution. Critically, we lack quantitative validation of the spatial models of population genetics. Here we combine analytics, on- and off-lattice simulations, and experiments with bacteria to perform quantitative tests of the theory. We study two bacterial species, the gut microbe Escherichia coli and the opportunistic pathogen Pseudomonas aeruginosa, and show that spatiogenetic patterns in colony biofilms of both species are accurately described by an extension of the one-dimensional stepping-stone model. We use one empirical measure, genetic diversity at the colony periphery, to parameterize our models and show that we can then accurately predict another key variable: the degree of short-range cell migration along an edge. Moreover, the model allows us to estimate other key parameters, including effective population size (density) at the expansion frontier. While our experimental system is a simplification of natural microbial communities, we argue that it constitutes proof of principle that the spatial models of population genetics can quantitatively capture organismal evolution.
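
    A bare-bones version of one-dimensional stepping-stone dynamics at a colony frontier (illustrative only; the paper's calibrated extension is not reproduced here) tracks two neutral lineages in a ring of demes, with drift and nearest-neighbour migration coarsening the sector pattern:

        import numpy as np

        rng = np.random.default_rng(0)
        n_demes, generations = 200, 500
        alleles = rng.integers(0, 2, n_demes)   # two neutral lineages at the front

        for _ in range(generations):
            # each deme is replaced by itself or a nearest neighbour (drift + migration)
            parents = (np.arange(n_demes) + rng.integers(-1, 2, n_demes)) % n_demes
            alleles = alleles[parents]

        # density of sector boundaries decays as lineages coalesce
        boundary_density = np.mean(alleles != np.roll(alleles, 1))
        print(f"sector-boundary density after {generations} generations: {boundary_density:.3f}")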

  2. A new universal dynamic model to describe eating rate and cumulative intake curves

    PubMed Central

    Paynter, Jonathan; Peterson, Courtney M; Heymsfield, Steven B

    2017-01-01

    Background: Attempts to model cumulative intake curves with quadratic functions have not simultaneously taken gustatory stimulation, satiation, and maximal food intake into account. Objective: Our aim was to develop a dynamic model for cumulative intake curves that captures gustatory stimulation, satiation, and maximal food intake. Design: We developed a first-principles model of cumulative intake that universally describes gustatory stimulation, satiation, and maximal food intake using 3 key parameters: 1) the initial eating rate, 2) the effective duration of eating, and 3) the maximal food intake. These model parameters were estimated in a study (n = 49) where eating rates were deliberately changed. Baseline data were used to compare the quality of the model's fit with that of the quadratic model. The 3 parameters were also calculated in a second study consisting of restrained and unrestrained eaters. Finally, we calculated when the gustatory stimulation phase is short or absent. Results: The mean sum squared error for the first-principles model was 337.1 ± 240.4 compared with 581.6 ± 563.5 for the quadratic model, a 43% improvement in fit. Individual comparison demonstrated lower errors for 94% of the subjects. Both sex (P = 0.002) and eating duration (P = 0.002) were associated with the initial eating rate (adjusted R2 = 0.23). Sex was also associated (P = 0.03 and P = 0.012) with the effective eating duration and maximum food intake (adjusted R2 = 0.06 and 0.11). In participants directed to eat as much as they could, compared with as much as they felt comfortable with, the maximal intake parameter was approximately double. The model found that certain parameter regions resulted in both stimulation and satiation phases, whereas others only produced a satiation phase. Conclusions: The first-principles model better quantifies interindividual differences in food intake, shows how aspects of food intake differ across subpopulations, and
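
    The paper's first-principles equations are not reproduced here, but a saturating-exponential stand-in shows how the three reported parameters interact: with initial eating rate r0 and maximal intake imax, intake saturates on the effective timescale imax/r0, whereas the quadratic model can decrease unphysically at large t. The functional form below is an assumption for illustration, not the authors' model.

        import numpy as np

        def saturating_intake(t, r0, imax):
            # cumulative intake with initial slope r0 that levels off at imax;
            # the effective duration of eating is then roughly imax / r0
            return imax * (1.0 - np.exp(-r0 * t / imax))

        def quadratic_intake(t, a, b):
            # classic quadratic cumulative-intake curve, for comparison
            return a * t + b * t ** 2

        t = np.linspace(0.0, 20.0, 5)                        # minutes
        print(saturating_intake(t, r0=40.0, imax=400.0))     # grams (illustrative values)
        print(quadratic_intake(t, a=40.0, b=-1.0))           # turns downward at large t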

  3. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions

    PubMed Central

    2014-01-01

    Background There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. Methods This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. Results The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. Conclusions The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. Trial registration number PROSPERO registration number: CRD42013004037. PMID:24885751

  4. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    PubMed

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  5. Quantitative studies of animal colour constancy: using the chicken as model

    PubMed Central

    2016-01-01

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations. PMID:27170714

  6. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    NASA Technical Reports Server (NTRS)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurement and computational modeling neighborhoods of the Earth-Moon community into ever closer proximity. We are now, however, in a position to become even better neighbors: modelers can generate testable hypotheses for geochemists, and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  7. Decompression sickness after air break in prebreathe described with a survival model.

    PubMed

    Conkin, Johnny

    2011-06-01

    A perception exists in aerospace that a brief interruption of a 100% oxygen prebreathe (PB) by breathing air has a substantial decompression sickness (DCS) consequence. The consequences of an air break during PB on the subsequent hypobaric DCS outcomes were evaluated. The hypothesis was that asymmetrical rather than symmetrical nitrogen (N2) kinetics best modeled the distribution of subsequent DCS survival times after PBs that included air breaks. DCS survival times from 95 controls for a 60-min PB prior to 2- or 4-h exposures to 4.37 psia (9144 m; 30,000 ft) were analyzed along with 3 experimental conditions: a 10-min air break (N = 40), a 20-min air break (N = 40), or a 60-min air break (N = 32) 30 min into the PB, followed by 30 min of PB. Ascent rate was 1524 m/min, and all 207 exposures included light exercise at 4.37 psia. Various computations of decompression dose were evaluated: either the difference or the ratio of P1N2 and P2, where P1N2 was the computed tissue N2 pressure accounting for the PB and P2 was the altitude pressure. Survival times were described with an accelerated log-logistic model with asymmetrical N2 kinetics, defining P1N2 - P2 as the best decompression dose. Exponential N2 uptake during the air break was described with a 10-min half time, and N2 elimination during PB with a 60-min half time. A simple conclusion about compensation for an air break is not possible because the duration and location of a break in a PB are variable. The resulting survival model is used to compute additional PB time to compensate for an air break in PB within the range of tested conditions.
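
    The asymmetrical kinetics are simple to state in code (a sketch using the half-times quoted above and an assumed ambient N2 pressure, not NASA's operational model): tissue N2 pressure relaxes exponentially toward the inspired N2 pressure with a 10-min half time while breathing air and a 60-min half time during oxygen prebreathe, and decompression dose is P1N2 - P2.

        import math

        P_N2_AIR = 11.6   # sea-level inspired N2 partial pressure, psia (assumed: 0.79 x 14.7)

        def update_p1n2(p1n2, minutes, breathing_air):
            half_time = 10.0 if breathing_air else 60.0   # uptake faster than elimination
            target = P_N2_AIR if breathing_air else 0.0   # 100% O2 drives tissue N2 downward
            k = math.log(2.0) / half_time
            return target + (p1n2 - target) * math.exp(-k * minutes)

        p = P_N2_AIR                      # saturated on air before the prebreathe
        p = update_p1n2(p, 30, False)     # 30 min of O2 prebreathe
        p = update_p1n2(p, 10, True)      # 10-min air break
        p = update_p1n2(p, 30, False)     # final 30 min of prebreathe
        print(f"decompression dose P1N2 - P2 = {p - 4.37:.2f} psi at 4.37 psia")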

  8. Computational modeling approaches to quantitative structure-binding kinetics relationships in drug discovery.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2018-03-21

    Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. The Transfer Function Model as a Tool to Study and Describe Space Weather Phenomena

    NASA Technical Reports Server (NTRS)

    Porter, Hayden S.; Mayr, Hans G.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    The Transfer Function Model (TFM) is a semi-analytical, linear model that is designed especially to describe thermospheric perturbations associated with magnetic storm and substorm activity. It is a multi-constituent model (N2, O, He, H, Ar) that accounts for wind-induced diffusion, which significantly affects not only the composition and mass density but also the temperature and wind fields. Because the TFM adopts a semi-analytic approach in which the geometry and temporal dependencies of the driving sources are removed through the use of height-integrated Green's functions, it provides physical insight into the essential properties of the processes being considered, uncluttered by the accidental complexities that arise from particular source geometries and time dependencies. Extending from the ground to 700 km, the TFM eliminates spurious effects due to arbitrarily chosen boundary conditions. A database of transfer functions, computed only once, can be used to synthesize a wide range of spatial and temporal source dependencies. The response synthesis can be performed quickly in real time using only limited computing capabilities. These features make the TFM unique among global dynamical models. Given these desirable properties, a version of the TFM has been developed for personal computers (PC) using advanced platform-independent 3D visualization capabilities. We demonstrate the model capabilities with simulations for different auroral sources, including the response of ducted gravity wave modes that propagate around the globe. The thermospheric response is found to depend strongly on the spatial and temporal frequency spectra of the storm. Such varied behavior is difficult to describe in statistical empirical models. To improve the capability of space weather prediction, the TFM thus could be grafted naturally onto existing statistical models using data assimilation.

  10. The Quantitative-MFG Test: A Linear Mixed Effect Model to Detect Maternal-Offspring Gene Interactions.

    PubMed

    Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S

    2016-01-01

    Maternal-offspring gene interactions, also known as maternal-fetal genotype (MFG) incompatibilities, are neglected in complex disease and quantitative trait studies. They are implicated in diseases with onset from birth to adulthood, but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model in which maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show that they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.
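
    The general shape of such a model is easy to express with off-the-shelf mixed-model software (illustrative only; the QMFG test's MFG-incompatibility parameterization and score tests are not reproduced here): maternal and offspring genotypes enter as fixed effects, and a family-level random intercept supplies the residual correlation between relatives.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n_fam, per_fam = 200, 3
        df = pd.DataFrame({
            "family": np.repeat(np.arange(n_fam), per_fam),
            "geno_mother": np.repeat(rng.integers(0, 3, n_fam), per_fam),
            "geno_child": rng.integers(0, 3, n_fam * per_fam),
        })
        fam_effect = np.repeat(rng.normal(0.0, 1.0, n_fam), per_fam)
        df["trait"] = (0.4 * df.geno_mother + 0.2 * df.geno_child
                       + fam_effect + rng.normal(0.0, 1.0, len(df)))

        # random intercept per family models residual correlation among relatives
        fit = smf.mixedlm("trait ~ geno_mother + geno_child", df, groups="family").fit()
        print(fit.params)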

  11. Quantitative Thermochronology

    NASA Astrophysics Data System (ADS)

    Braun, Jean; van der Beek, Peter; Batt, Geoffrey

    2006-05-01

    Thermochronology, the study of the thermal history of rocks, enables us to quantify the nature and timing of tectonic processes. Quantitative Thermochronology is a robust review of isotopic ages, and presents a range of numerical modeling techniques to allow the physical implications of isotopic age data to be explored. The authors provide analytical, semi-analytical, and numerical solutions to the heat transfer equation in a range of tectonic settings and under varying boundary conditions. They then illustrate their modeling approach with a large number of case studies. The benefits of different thermochronological techniques are also described. Computer programs on an accompanying website at www.cambridge.org/9780521830577 are introduced through the text and provide a means of solving the heat transport equation in the deforming Earth to predict the ages of rocks and compare them directly to geological and geochronological data. Several short tutorials, with hints and solutions, are also included. Key features:
    - Numerous case studies help geologists to interpret age data and relate them to Earth processes
    - Essential background material to aid understanding and use of thermochronological data
    - A thorough treatise on numerical modeling of heat transport in the Earth's crust
    - A supporting website hosting relevant computer programs and colour slides of figures from the book for use in teaching

  12. Three models intercomparison for Quantitative Precipitation Forecast over Calabria

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.

    2004-11-01

    In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations), funded by MIUR (Ministero dell'Università e della Ricerca Scientifica, the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Modeling 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison; this model runs operationally at “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici”. Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. Large-scale forcing is the same for all models considered, so differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models. The largest differences are in BIA (frequency bias) compared with the other scores considered. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better for the second-day forecast.
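
    The verification scores mentioned above are computed from a 2x2 contingency table of forecast versus observed rainfall exceeding a threshold; as a reminder, the frequency bias (BIA) is the ratio of forecast "yes" events to observed "yes" events (a generic definition, with illustrative counts):

        def bias_score(hits, false_alarms, misses):
            # BIA = (hits + false alarms) / (hits + misses);
            # 1.0 is unbiased, >1 over-forecasting, <1 under-forecasting
            return (hits + false_alarms) / (hits + misses)

        print(bias_score(hits=30, false_alarms=20, misses=10))   # 1.25 -> over-forecasting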

  13. Comparison of semi-quantitative and quantitative dynamic contrast-enhanced MRI evaluations of vertebral marrow perfusion in a rat osteoporosis model.

    PubMed

    Zhu, Jingqi; Xiong, Zuogang; Zhang, Jiulong; Qiu, Yuyou; Hua, Ting; Tang, Guangyu

    2017-11-14

    This study aims to investigate the technical feasibility of semi-quantitative and quantitative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the assessment of longitudinal changes of marrow perfusion in a rat osteoporosis model, using bone mineral density (BMD) measured by micro-computed tomography (micro-CT) and histopathology as the gold standards. Fifty rats were randomly assigned to the control group (n=25) and the ovariectomy (OVX) group, whose bilateral ovaries were excised (n=25). Semi-quantitative and quantitative DCE-MRI, micro-CT, and histopathological examinations were performed on lumbar vertebrae at baseline and 3, 6, 9, and 12 weeks after operation. The differences between the two groups in terms of the semi-quantitative DCE-MRI parameter (maximum enhancement, Emax), quantitative DCE-MRI parameters (volume transfer constant, Ktrans; interstitial volume, Ve; and efflux rate constant, Kep), micro-CT parameter (BMD), and histopathological parameter (microvessel density, MVD) were compared at each of the time points using an independent-sample t test. The differences in these parameters between baseline and the other time points in each group were assessed via Bonferroni's multiple comparison test. A Pearson correlation analysis was applied to assess the relationships between DCE-MRI, micro-CT, and histopathological parameters. In the OVX group, the Emax values decreased significantly compared with those of the control group at weeks 6 and 9 (p=0.003 and 0.004, respectively). The Ktrans values decreased significantly compared with those of the control group from week 3 (p<0.05). However, the Ve values decreased significantly only at week 9 (p=0.032), and no difference in Kep was found between the two groups. The BMD values of the OVX group decreased significantly compared with those of the control group from week 3 (p<0.05). Transmission electron microscopy showed tighter gaps between vascular endothelial cells with swollen mitochondria
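
    The three quantitative parameters are linked by the standard Tofts relation Kep = Ktrans/Ve, and the tissue concentration curve they generate can be sketched as follows (an illustrative implementation of the standard Tofts model, not the study's fitting pipeline):

        import numpy as np

        def tofts_concentration(t, cp, ktrans, ve, dt):
            # Ct(t) = Ktrans * integral_0^t Cp(tau) exp(-Kep (t - tau)) dtau
            kep = ktrans / ve
            ct = np.zeros_like(t)
            for i, ti in enumerate(t):
                tau = t[: i + 1]
                ct[i] = ktrans * np.sum(cp[: i + 1] * np.exp(-kep * (ti - tau))) * dt
            return ct

        t = np.arange(0.0, 300.0, 1.0)            # seconds
        cp = np.exp(-t / 120.0) * (t > 10)        # toy arterial input function
        ct = tofts_concentration(t, cp, ktrans=0.002, ve=0.3, dt=1.0)
        print(f"peak tissue concentration: {ct.max():.4f} (arbitrary units)")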

  14. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    USDA-ARS?s Scientific Manuscript database

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  15. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    PubMed

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing the genetic effects of the nucleus, the cytoplasm, and nuclear-cytoplasmic interaction (NCI), as well as their genotype-by-environment (GE) interactions, for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  16. A dynamical model for describing behavioural interventions for weight loss and body composition change

    PubMed Central

    Navarro-Barrientos, J.-Emeterio; Rivera, Daniel E.; Collins, Linda M.

    2011-01-01

    We present a dynamical model incorporating both physiological and psychological factors that predicts changes in body mass and composition during the course of a behavioral intervention for weight loss. The model consists of a three-compartment energy balance integrated with a mechanistic psychological model inspired by the Theory of Planned Behavior (TPB). The latter describes how important variables in a behavioural intervention can influence healthy eating habits and increased physical activity over time. The novelty of the approach lies in representing the behavioural intervention as a dynamical system, and the integration of the psychological and energy balance models. Two simulation scenarios are presented that illustrate how the model can improve the understanding of how changes in intervention components and participant differences affect outcomes. Consequently, the model can be used to inform behavioural scientists in the design of optimised interventions for weight loss and body composition change. PMID:21673826
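
    A toy version of the energy-balance core of such a model (the paper's three-compartment formulation and Theory of Planned Behavior component are not reproduced; the two constants below are assumed round numbers) treats weight as a state variable driven by the gap between intake and expenditure:

        def simulate_weight(days, intake_kcal_per_day, weight0_kg):
            RHO = 7700.0        # assumed energy density of body-mass change, kcal/kg
            K_EXPEND = 31.0     # assumed maintenance expenditure, kcal/kg/day
            w = weight0_kg
            for _ in range(days):
                dw = (intake_kcal_per_day - K_EXPEND * w) / RHO   # first-order energy balance
                w += dw
            return w

        # weight drifts toward the equilibrium intake / K_EXPEND (about 71 kg here)
        print(f"{simulate_weight(180, 2200.0, 90.0):.1f} kg after 180 days")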

  17. Suitability of parametric models to describe the hydraulic properties of an unsaturated coarse sand and gravel

    USGS Publications Warehouse

    Mace, Andy; Rudolph, David L.; Kachanoski, R. Gary

    1998-01-01

    The performance of parametric models used to describe soil water retention (SWR) properties and predict unsaturated hydraulic conductivity (K) as a function of volumetric water content (θ) is examined using SWR and K(θ) data for coarse sand and gravel sediments. Six 70 cm long, 10 cm diameter cores of glacial outwash were instrumented at eight depths with porous-cup tensiometers and time domain reflectometry probes to measure soil water pressure head (h) and θ, respectively, for seven unsaturated and one saturated steady-state flow conditions. Forty-two θ(h) and K(θ) relationships were measured from the infiltration tests on the cores. Of the four SWR models compared in the analysis, the van Genuchten (1980) equation with parameters m and n restricted according to the Mualem (m = 1 - 1/n) criterion is best suited to describe the θ(h) relationships. The accuracy of two models that predict K(θ) using parameter values derived from the SWR models was also evaluated. The model developed by van Genuchten (1980) based on the theoretical expression of Mualem (1976) predicted K(θ) more accurately than the van Genuchten (1980) model based on the theory of Burdine (1953). A sensitivity analysis shows that more accurate predictions of K(θ) are achieved using SWR model parameters derived with the residual water content (θr) specified according to independent measurements of θ at values of h where ∂θ/∂h ≈ 0, rather than with model-fit θr values. The accuracy of the model K(θ) function improves markedly when at least one value of unsaturated K is used to scale the K(θ) function predicted using the saturated K. The results of this investigation indicate that the hydraulic properties of coarse-grained sediments can be accurately described using the parametric models. In addition, data collection efforts should focus on measuring at least one value of unsaturated hydraulic conductivity and as complete a set of SWR data as possible, particularly in the dry range.
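
    For reference, the van Genuchten (1980) retention curve with the Mualem restriction m = 1 - 1/n, and the resulting closed-form conductivity function, can be coded directly (parameter values below are generic placeholders for a coarse sand, not the paper's fitted values):

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            m = 1.0 - 1.0 / n                                  # Mualem restriction
            se = (1.0 + (alpha * abs(h)) ** n) ** (-m)         # effective saturation
            return theta_r + (theta_s - theta_r) * se

        def mualem_k(theta, theta_r, theta_s, n, ks):
            m = 1.0 - 1.0 / n
            se = (theta - theta_r) / (theta_s - theta_r)
            return ks * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

        # h in cm (alpha in 1/cm), K in the units of ks; values are illustrative
        theta = van_genuchten_theta(h=-30.0, theta_r=0.05, theta_s=0.38, alpha=0.1, n=2.5)
        print(theta, mualem_k(theta, 0.05, 0.38, 2.5, ks=1.0e-4))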

  18. Quantitative stress measurement of elastic deformation using mechanoluminescent sensor: An intensity ratio model

    NASA Astrophysics Data System (ADS)

    Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng

    2018-04-01

    The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and the ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to depend on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provides a physical explanation for the relationship between ML intensity and stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation, and could provide a useful reference for quantitative stress measurement using ML sensors in general.

  19. Mathematical model describing the thyroid-pituitary axis with distributed time delays in hormone transportation

    NASA Astrophysics Data System (ADS)

    Neamţu, Mihaela; Stoian, Dana; Navolan, Dan Bogdan

    2014-12-01

    In the present paper we provide a mathematical model that describes the hypothalamus-pituitary-thyroid axis in autoimmune (Hashimoto's) thyroiditis. Since there is a spatial separation between the thyroid and the pituitary gland in the body, time is needed for the transport of thyrotropin and thyroxine between the glands. Thus, distributed time delays are considered, with both weak and Dirac kernels. The delayed model is analyzed with respect to stability and bifurcation behavior. The last part contains numerical simulations to illustrate the effectiveness of our results and conclusions.
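
    For reference, the two kernel choices mentioned above are conventionally written as follows (standard distributed-delay notation, not a transcription of the paper's full system): the delayed signal is a weighted integral over the history, with the weak kernel giving exponentially fading memory and the Dirac kernel recovering a discrete delay.

        % distributed delay acting on hormone transport (schematic)
        \[
          x_d(t) = \int_{0}^{\infty} k(s)\, x(t-s)\, \mathrm{d}s,
          \qquad
          k(s) = a\, e^{-a s} \;\;\text{(weak kernel)}
          \quad\text{or}\quad
          k(s) = \delta(s-\tau) \;\;\text{(Dirac kernel)}.
        \]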

  20. From Inverse Problems in Mathematical Physiology to Quantitative Differential Diagnoses

    PubMed Central

    Zenker, Sven; Rubin, Jonathan; Clermont, Gilles

    2007-01-01

    The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses

  1. A Quantitative Review and Meta-Models of the Variability and Factors Affecting Oral Drug Absorption-Part I: Gastrointestinal pH.

    PubMed

    Abuhelwa, Ahmad Y; Foster, David J R; Upton, Richard N

    2016-09-01

    This study aimed to conduct a quantitative meta-analysis for the values of, and variability in, gastrointestinal (GI) pH in the different GI segments; characterize the effect of food on the values and variability in these parameters; and present quantitative meta-models of distributions of GI pH to help inform models of oral drug absorption. The literature was systematically reviewed for the values of, and the variability in, GI pH under fed and fasted conditions. The GI tract was categorized into the following 10 distinct regions: stomach (proximal, mid-distal), duodenum (proximal, mid-distal), jejunum and ileum (proximal, mid, and distal small intestine), and colon (ascending, transverse, and descending colon). The meta-analysis used the "metafor" package of the R language. The time course of postprandial stomach pH was modeled using NONMEM. Food significantly influenced the estimated meta-mean stomach and duodenal pH but had no significant influence on small intestinal and colonic pH. The time course of postprandial pH was described using an exponential model. Increased meal caloric content increased the extent and duration of postprandial gastric pH buffering. The different parts of the small intestine had significantly different pH. Colonic pH was significantly different for the descending but not for the ascending and transverse colon. Knowledge of GI pH is important for the formulation design of pH-dependent dosage forms and for understanding the dissolution and absorption of orally administered drugs. The meta-models of GI pH may also be used as part of semi-physiological pharmacokinetic models to characterize the effect of GI pH on in vivo drug release and pharmacokinetics.
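
    The exponential time course referred to above can be sketched as follows (parameter values are illustrative round numbers, not the fitted meta-analysis estimates): a meal buffers gastric pH upward, after which pH relaxes exponentially back toward the fasted baseline, with larger meals raising the peak and slowing the decay.

        import math

        def stomach_ph(t_hours, ph_fasted=1.5, ph_peak_fed=5.0, k_per_hour=1.0):
            # higher caloric content -> higher peak and smaller k (longer buffering)
            return ph_fasted + (ph_peak_fed - ph_fasted) * math.exp(-k_per_hour * t_hours)

        for t in (0.0, 0.5, 1.0, 2.0, 4.0):
            print(f"{t:4.1f} h post-meal: stomach pH {stomach_ph(t):.2f}")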

  2. Quantitative NDE applied to composites and metals

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S.; Winfree, William P.; Parker, F. Raymond; Heath, D. Michele; Welch, Christopher S.

    1989-01-01

    Research at the NASA/Langley Research Center concerning quantitative NDE of composites and metals is reviewed. The relationship between ultrasonics and polymer cure is outlined. NDE models are presented, which can be used to develop measurement technologies for characterizing the curing of a polymer system for composite materials. The models can be used to determine the glass transition temperature, the degree of cure, and the cure rate. The application of the model to control autoclave processing of composite materials is noted. Consideration is given to the use of thermal diffusion models combined with controlled thermal input measurements to determine the thermal diffusivity of materials. Also, a two-dimensional physical model is described that permits delaminations in samples of Space Shuttle Solid Rocket Motors to be detected in thermograms in the presence of cooling effects and uneven heating.

  3. A mathematical model for describing the mechanical behaviour of root canal instruments.

    PubMed

    Zhang, E W; Cheung, G S P; Zheng, Y F

    2011-01-01

    The purpose of this study was to establish a general mathematical model for describing the mechanical behaviour of root canal instruments by combining a theoretical analytical approach with a numerical finite-element method. Mathematical formulas representing the longitudinal (taper, helical angle and pitch) and cross-sectional configurations and area, the bending and torsional inertia, the curvature of the boundary point and the geometry of the loading condition were derived. Torsional and bending stresses and the resultant deformation were expressed mathematically as functions of these geometric parameters, the modulus of elasticity of the material, and the applied load. As illustrations, three brands of NiTi endodontic files of different cross-sectional configurations (ProTaper, Hero 642, and Mani NRT) were analysed under pure torsion and pure bending by entering the model into a finite-element analysis package (ANSYS). Numerical results confirmed that mathematical models are a feasible means of analysing the mechanical properties and predicting the stress and deformation of root canal instruments during root canal preparation. Mathematical and numerical models can thus serve as a criterion for instrument design and predict the stress and strain experienced by endodontic instruments during root canal preparation. © 2010 International Endodontic Journal.

  4. Quantitative Reasoning in Problem Solving

    ERIC Educational Resources Information Center

    Ramful, Ajay; Ho, Siew Yin

    2015-01-01

    In this article, Ajay Ramful and Siew Yin Ho explain the meaning of quantitative reasoning, describing how it is used to solve mathematical problems. They also describe a diagrammatic approach to represent relationships among quantities and provide examples of problems and their solutions.

  5. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset of CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment visceral fat areas (VFA) and subcutaneous fat areas (SFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association with either PFS or OS in the group of patients not receiving maintenance bevacizumab. This study therefore demonstrates the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  6. Quantitative structure-property relationship (QSPR) modeling of drug-loaded polymeric micelles via genetic function approximation.

    PubMed

    Wu, Wensheng; Zhang, Canyang; Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan

    2015-01-01

    Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments.

  7. Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model

    NASA Astrophysics Data System (ADS)

    Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi

    2017-09-01

    Residues of harmful dyes such as Auramine O (AO) in herbal and food products threaten human health, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used in this paper for the quantitative determination of AO in combination with an improved partial least-squares regression (PLSR) model. Absorbance spectra of herbal samples with different concentrations were obtained by THz time-domain spectroscopy (THz-TDS) in the band between 0.2 THz and 1.6 THz. We applied two-dimensional correlation spectroscopy (2DCOS) to improve the PLSR model. This method highlighted the spectral differences between concentrations, provided a clear criterion for input-interval selection, and improved the accuracy of the detection results. The experimental results indicate that the combination of THz spectroscopy and 2DCOS-PLSR is an excellent quantitative analysis method.
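
    The PLSR calibration step itself is standard (a scikit-learn sketch on synthetic spectra; the paper's 2DCOS-guided band selection is not reproduced here):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        freqs = np.linspace(0.2, 1.6, 100)                # THz band used above
        conc = rng.uniform(0.0, 10.0, 40)                 # synthetic AO concentrations
        # toy absorbance: one Gaussian band scaling with concentration, plus noise
        spectra = (conc[:, None] * np.exp(-((freqs - 0.9) ** 2) / 0.02)
                   + 0.05 * rng.standard_normal((40, 100)))

        pls = PLSRegression(n_components=3).fit(spectra, conc)
        print(f"calibration R^2: {pls.score(spectra, conc):.3f}")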

  8. A quantitative framework for the forward design of synthetic miRNA circuits.

    PubMed

    Bloom, Ryan J; Winkler, Sally M; Smolke, Christina D

    2014-11-01

    Synthetic genetic circuits incorporating regulatory components based on RNA interference (RNAi) have been used in a variety of systems. A comprehensive understanding of the parameters that determine the relationship between microRNA (miRNA) and target expression levels is lacking. We describe a quantitative framework supporting the forward engineering of gene circuits that incorporate RNAi-based regulatory components in mammalian cells. We developed a model that captures the quantitative relationship between miRNA and target gene expression levels as a function of parameters, including mRNA half-life and miRNA target-site number. We extended the model to synthetic circuits that incorporate protein-responsive miRNA switches and designed an optimized miRNA-based protein concentration detector circuit that noninvasively measures small changes in the nuclear concentration of β-catenin owing to induction of the Wnt signaling pathway. Our results highlight the importance of methods for guiding the quantitative design of genetic circuits to achieve robust, reliable and predictable behaviors in mammalian cells.

  9. Using Metaphorical Models for Describing Glaciers

    ERIC Educational Resources Information Center

    Felzmann, Dirk

    2014-01-01

    To date, there has been little conceptual change research regarding conceptions about glaciers. This study used the theoretical background of embodied cognition to reconstruct different metaphorical concepts with respect to the structure of a glacier. Applying the Model of Educational Reconstruction, the conceptions of students and scientists…

  10. SYSTEMS BIOLOGY MODEL DEVELOPMENT AND APPLICATION

    EPA Science Inventory

    Systems biology models holistically describe, in a quantitative fashion, the relationships between different levels of a biological system. Relationships between individual components of a system are delineated. Systems biology models describe how the components of the system inter...

  11. Teaching quantitative biology: goals, assessments, and resources

    PubMed Central

    Aikens, Melissa L.; Dolan, Erin L.

    2014-01-01

    More than a decade has passed since the publication of BIO2010, calling for an increased emphasis on quantitative skills in the undergraduate biology curriculum. In that time, relatively few papers have been published that describe educational innovations in quantitative biology or provide evidence of their effects on students. Using a “backward design” framework, we lay out quantitative skill and attitude goals, assessment strategies, and teaching resources to help biologists teach more quantitatively. Collaborations between quantitative biologists and education researchers are necessary to develop a broader and more appropriate suite of assessment tools, and to provide much-needed evidence on how particular teaching strategies affect biology students' quantitative skill development and attitudes toward quantitative work. PMID:25368425

  12. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  13. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies, including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis combining data from multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT, and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.

  14. Quantitative Finance

    NASA Astrophysics Data System (ADS)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  15. How to quantitatively evaluate safety of driver behavior upon accident? A biomechanical methodology

    PubMed Central

    Zhang, Wen; Cao, Jieer

    2017-01-01

    How to evaluate driver spontaneous reactions in various collision patterns in a quantitative way is one of the most important topics in vehicle safety. Firstly, this paper constructs representative numerical crash scenarios described by impact velocity, impact angle and contact position, based on a finite element (FE) computation platform. Secondly, a driver cabin model is extracted and described in a well-validated multi-rigid-body (MB) model to compute the value of a weighted injury criterion and thereby quantitatively assess drivers' overall injury under given circumstances. Furthermore, based on the coupling of FE and MB, parametric studies of various crash scenarios are conducted. It is revealed that the variation of the WIC (Weighted Injury Criteria) value under high impact velocities is quite distinct from that at low impact velocities. In addition, the coupling effect is elucidated by the fact that the difference in WIC values among the three impact velocities under smaller impact angles tends to be distinctly higher than that under larger impact angles. Meanwhile, high impact velocity also increases the sensitivity of WIC under different collision positions and impact angles. The results may provide a new methodology to quantitatively evaluate driving behaviors and serve as a significant step towards collision avoidance for autonomous driving vehicles. PMID:29240789

  16. How to quantitatively evaluate safety of driver behavior upon accident? A biomechanical methodology.

    PubMed

    Zhang, Wen; Cao, Jieer; Xu, Jun

    2017-01-01

    How to evaluate driver spontaneous reactions in various collision patterns in a quantitative way is one of the most important topics in vehicle safety. Firstly, this paper constructs representative numerical crash scenarios described by impact velocity, impact angle and contact position, based on a finite element (FE) computation platform. Secondly, a driver cabin model is extracted and described in a well-validated multi-rigid-body (MB) model to compute the value of a weighted injury criterion and thereby quantitatively assess drivers' overall injury under given circumstances. Furthermore, based on the coupling of FE and MB, parametric studies of various crash scenarios are conducted. It is revealed that the variation of the WIC (Weighted Injury Criteria) value under high impact velocities is quite distinct from that at low impact velocities. In addition, the coupling effect is elucidated by the fact that the difference in WIC values among the three impact velocities under smaller impact angles tends to be distinctly higher than that under larger impact angles. Meanwhile, high impact velocity also increases the sensitivity of WIC under different collision positions and impact angles. The results may provide a new methodology to quantitatively evaluate driving behaviors and serve as a significant step towards collision avoidance for autonomous driving vehicles.

  17. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  18. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    PubMed Central

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  19. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    PubMed

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile conflicts among benefits in the economic, social, and ecological sectors.

  20. Quantitative Structure-Property Relationship (QSPR) Modeling of Drug-Loaded Polymeric Micelles via Genetic Function Approximation

    PubMed Central

    Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan

    2015-01-01

    Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments. PMID:25780923

  1. How Mathematics Describes Life

    NASA Astrophysics Data System (ADS)

    Teklu, Abraham

    2017-01-01

    The circle of life is something we have all heard of from somewhere, but we don't usually try to calculate it. For some time we have been working on analyzing a predator-prey model to better understand how mathematics can describe life, in particular the interaction between two different species. The model we are analyzing is called the Holling-Tanner model, and it cannot be solved analytically. The Holling-Tanner model is very common in population dynamics because it is a simple descriptor of how predators and prey interact. The model is a system of two differential equations. It is not specific to any particular pair of species, so it can describe predator-prey systems ranging from lions and zebras to white blood cells and infections. One thing all these systems have in common is critical points. A critical point is a pair of population values that keeps both populations constant; it is important because there the differential equations are equal to zero. For this model there are two critical points, a predator-free critical point and a coexistence critical point. Most of our analysis is on the coexistence critical point, because the predator-free critical point is always unstable and frankly less interesting. We considered two regimes for the differential equations, large B and small B, where A, B, and C are parameters in the differential equations that control the system: B measures how responsive the predators are to changes in the population, A represents predation of the prey, and C represents the satiation point of the prey population. For the large B case we were able to approximate the system of differential equations by a single scalar equation. For the small B case we were able to predict the limit cycle, a process in which the predator and prey populations grow and shrink periodically. This model has a limit cycle in the regime of small B, which we solved for.
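
    A minimal numerical sketch of the system described above, assuming a common dimensionless form of the Holling-Tanner equations (the abstract does not give its exact parameterization, so the equations and the values of A, B and C below are illustrative):

      # Integrate a dimensionless Holling-Tanner predator-prey system and
      # locate its coexistence critical point numerically.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import fsolve

      A, B, C = 1.0, 0.2, 0.5   # predation, predator responsiveness, satiation

      def rhs(t, z):
          x, y = z              # prey and predator densities
          dx = x * (1.0 - x) - A * x * y / (x + C)
          dy = B * y * (1.0 - y / x)
          return [dx, dy]

      # Coexistence critical point: both derivatives vanish with x, y > 0.
      xc, yc = fsolve(lambda z: rhs(0.0, z), [0.4, 0.6])
      print(f"coexistence critical point: x*={xc:.3f}, y*={yc:.3f}")

      # For small B the critical point can lose stability and the trajectory
      # settles onto the limit cycle mentioned in the abstract.
      sol = solve_ivp(rhs, (0.0, 400.0), [0.8, 0.2])
      print("state at t=400:", np.round(sol.y[:, -1], 3))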

  2. Describing the clinical reasoning process: application of a model of enablement to a pediatric case.

    PubMed

    Furze, Jennifer; Nelson, Kelly; O'Hare, Megan; Ortner, Amanda; Threlkeld, A Joseph; Jensen, Gail M

    2013-04-01

    Clinical reasoning is a core tenet of physical therapy practice, leading to optimal patient care. The purpose of this case was to describe the outcomes, subjective experience, and reflective clinical reasoning process for a child with cerebral palsy using the International Classification of Functioning, Disability, and Health (ICF) model. The ICF framework was applied to a 9-year-old boy with spastic triplegic cerebral palsy to capture the interwoven factors present in this case. Interventions in the pool occurred twice weekly for 1 h over a 10-week period. Immediately after the intervention and at 4 months post-intervention, the child had made functional and meaningful gains. The family unit also developed an enjoyment of exercising together, and each individual family member described psychological, emotional, or physical health improvements. Reflection using the ICF model as a framework to discuss clinical reasoning can highlight important factors contributing to effective patient management.

  3. [Quantitative relationship between gas chromatographic retention time and structural parameters of alkylphenols].

    PubMed

    Ruan, Xiaofang; Zhang, Ruisheng; Yao, Xiaojun; Liu, Mancang; Fan, Botao

    2007-03-01

    Alkylphenols are a group of persistent pollutants in the environment and can adversely disturb the human endocrine system. It is therefore important to effectively separate and measure alkylphenols. To guide the chromatographic analysis of these compounds in practice, a quantitative relationship between molecular structure and the retention time of alkylphenols is needed. In this study, topological, constitutional, geometrical, electrostatic and quantum-chemical descriptors of 44 alkylphenols were calculated with the CODESSA software, and these descriptors were pre-selected using the heuristic method. As a result, a three-descriptor linear model (LM) was developed to describe the relationship between molecular structure and retention time. A non-linear regression model was also developed, based on a support vector machine (SVM) using the same three descriptors. The correlation coefficients (R(2)) for the LM and SVM were 0.98 and 0.92, and the corresponding root-mean-square errors were 0.99 and 2.77, respectively. Comparing the stability and prediction ability of the two models showed that the linear model was the better method for describing the quantitative relationship between retention time and molecular structure. These results suggest that the linear model can be applied to the chromatographic analysis of alkylphenols with known molecular structural parameters.
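
    A schematic of the comparison performed in the study, with simulated stand-ins for the CODESSA descriptors and retention times (scikit-learn is used here purely for illustration; the original work used different tooling):

      # Fit and compare a three-descriptor linear model and an SVM regression.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.svm import SVR
      from sklearn.metrics import r2_score, mean_squared_error

      rng = np.random.default_rng(0)
      X = rng.normal(size=(44, 3))          # 44 alkylphenols, 3 descriptors
      t_ret = 10 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=44)

      for name, model in [("LM", LinearRegression()),
                          ("SVM", SVR(kernel="rbf", C=10.0))]:
          pred = model.fit(X, t_ret).predict(X)
          rmse = mean_squared_error(t_ret, pred) ** 0.5
          print(f"{name}: R2={r2_score(t_ret, pred):.3f}, RMSE={rmse:.3f}")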

  4. Using Popular Culture to Teach Quantitative Reasoning

    ERIC Educational Resources Information Center

    Hillyard, Cinnamon

    2007-01-01

    Popular culture provides many opportunities to develop quantitative reasoning. This article describes a junior-level, interdisciplinary, quantitative reasoning course that uses examples from movies, cartoons, television, magazine advertisements, and children's literature. Some benefits from and cautions to using popular culture to teach…

  5. Reader's Response: Describing and Analyzing Quantitative Data

    ERIC Educational Resources Information Center

    McGrath, April L.

    2013-01-01

    The work of Harris and Martin (2012) on student motivations for choosing to complete online courses provides information on an important area of development within post-secondary education. As noted by the authors, online learning is an expanding field and learning more about why students choose online courses and their experiences in such courses…

  6. Electromagnetic braking: A simple quantitative model

    NASA Astrophysics Data System (ADS)

    Levin, Yan; da Silveira, Fernando L.; Rizzato, Felipe B.

    2006-09-01

    A calculation is presented that quantitatively accounts for the terminal velocity of a cylindrical magnet falling through a long copper or aluminum pipe. The experiment and the theory are a dramatic illustration of Faraday's and Lenz's laws.
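
    The essence of such a model is a retarding force proportional to speed. A small sketch under that assumption (in the paper's treatment the damping coefficient k follows from the magnet's dipole moment and the pipe's conductivity and geometry; here it is simply assigned):

      # Magnet falling through a conducting pipe: m dv/dt = m g - k v,
      # so the terminal velocity is v_T = m g / k.
      import numpy as np

      m, g, k = 0.010, 9.81, 0.7               # kg, m/s^2, kg/s (k illustrative)
      v_T = m * g / k
      t = np.linspace(0.0, 1.0, 5)
      v = v_T * (1.0 - np.exp(-k * t / m))     # closed-form solution with v(0) = 0
      print(f"terminal velocity: {v_T:.3f} m/s")
      print("approach to v_T:", np.round(v, 3))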

  7. Quantitation of active pharmaceutical ingredients and excipients in powder blends using designed multivariate calibration models by near-infrared spectroscopy.

    PubMed

    Li, Weiyong; Worosila, Gregory D

    2005-05-13

    This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used in generating a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amounts of powders into separate 20-mL scintillation vials and were mixed manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing of the models demonstrated that the 2-level models were as effective as the 5-level ones, which reduced the calibration sample number to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implications of the bias are discussed.
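
    A minimal sketch of the PLS calibration workflow on simulated blend spectra (the spectra, component count and noise level are invented for illustration; the study used measured NIR spectra):

      # PLS calibration for component concentration from NIR-like spectra.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n_samples, n_wavelengths = 125, 200
      pure = rng.random((4, n_wavelengths))               # 4 pure-component spectra
      conc = rng.dirichlet(np.ones(4), size=n_samples)    # w/w fractions
      spectra = conc @ pure + rng.normal(scale=0.005, size=(n_samples, n_wavelengths))

      X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc[:, 0], random_state=0)
      pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
      print("R2 on held-out samples:", round(pls.score(X_te, y_te), 3))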

  8. Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  9. Quantitative 3D analysis of shape dynamics of the left ventricle

    NASA Astrophysics Data System (ADS)

    Scowen, Barry C.; Smith, Stephen L.; Vannan, Mani A.; Arsenault, Marie

    1998-07-01

    There is an established link between Left Ventricular (LV) geometry and its performance. As a consequence of ischemic heart disease and the attempt to relieve myocardial tissue stress, ventricle shape begins to distort from a conical to a spherical geometry, with a reduction in the pumping efficiency of the chamber. If untreated, premature heart failure will result. To increase the chances of successful treatment it is important for the benefit of the patient to detect these abnormalities as soon as possible. It is the development of a technique to characterize and quantify the shape of the left ventricle that is described here. The system described in this paper uses a novel helix model which combines the advantages of current two-dimensional (2D) quantitative measures, which provide limited information, with 3D qualitative methods, which provide accurate reconstructions of the LV using computationally expensive rendering schemes. A phantom object and dog ventricles (normal/abnormal) were imaged and helical models constructed. The results are encouraging, with differences between normal and abnormal ventricles detectable in both diastole and systole. Further work entails building a library of subjects in order to determine the relationship between ventricle geometry and quantitative measurements.

  10. A Minimal Model Describing Hexapedal Interlimb Coordination: The Tegotae-Based Approach

    PubMed Central

    Owaki, Dai; Goda, Masashi; Miyazawa, Sakiko; Ishiguro, Akio

    2017-01-01

    Insects exhibit adaptive and versatile locomotion despite their minimal neural computing. Such locomotor patterns are generated via coordination between leg movements, i.e., interlimb coordination, which is largely controlled in a distributed manner by neural circuits located in the thoracic ganglia. However, the mechanism responsible for interlimb coordination remains elusive. Understanding this mechanism will help us to elucidate the fundamental control principles of animals' agile locomotion and to realize legged robots that are truly adaptive and could not be developed solely by conventional control theories. This study aims at providing a "minimal" model of the interlimb coordination mechanism underlying hexapedal locomotion, in the hope that a single control principle can satisfactorily reproduce various aspects of insect locomotion. To this end, we introduce a novel concept we named "Tegotae," a Japanese word describing the extent to which a perceived reaction matches an expectation. Using the Tegotae-based approach, we show that a surprisingly systematic design of the local sensory feedback mechanisms essential for interlimb coordination can be realized. We also use a hexapod robot we developed to show that our mathematical model of the interlimb coordination mechanism satisfactorily reproduces various insects' gait patterns. PMID:28649197
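
    A schematic reading of the Tegotae idea as a phase-oscillator feedback rule (the rule below, phi_dot = omega - sigma * N * cos(phi) with N the leg's ground reaction force, is a simplified form; the paper couples the legs through body mechanics, which this toy omits):

      # Six decoupled Tegotae-style leg oscillators with a toy load model.
      import numpy as np

      omega, sigma, dt = 2.0 * np.pi, 1.5, 0.002
      phi = np.random.default_rng(2).uniform(0.0, 2.0 * np.pi, size=6)

      def ground_reaction(phi):
          # Toy load: a leg bears load only during its "stance" half-cycle.
          return np.clip(np.sin(phi), 0.0, None)

      for _ in range(20000):
          N = ground_reaction(phi)
          phi += (omega - sigma * N * np.cos(phi)) * dt   # local feedback rule

      print("final relative phases:", np.round((phi - phi[0]) % (2 * np.pi), 2))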

  11. Quantitative Primary Tumor Indocyanine Green Measurements Predict Osteosarcoma Metastatic Lung Burden in a Mouse Model.

    PubMed

    Fourman, Mitchell S; Mahjoub, Adel; Mandell, Jon B; Yu, Shibing; Tebbets, Jessica C; Crasto, Jared A; Alexander, Peter E; Weiss, Kurt R

    2018-03-01

    Current preclinical osteosarcoma (OS) models largely focus on quantifying primary tumor burden. However, most fatalities from OS are caused by metastatic disease. The quantification of metastatic OS currently relies on CT, which is limited by motion artifact, requires intravenous contrast, and can be technically demanding in the preclinical setting. We describe the ability of indocyanine green (ICG) fluorescence angiography to quantify primary and metastatic OS in a previously validated orthotopic, immunocompetent mouse model. (1) Can near-infrared ICG fluorescence be used to attach a comparable, quantitative value to the primary OS tumor in our experimental mouse model? (2) Will primary tumor fluorescence differ in mice that go on to develop metastatic lung disease? (3) Does primary tumor fluorescence correlate with tumor volume measured with CT? Six groups of 4- to 6-week-old immunocompetent Balb/c mice (n = 6 per group) received paraphyseal injections into the proximal tibia of the left hindlimb, consisting of variable numbers of K7M2 mouse OS cells. A hindlimb transfemoral amputation was performed 4 weeks after injection, with euthanasia and lung extraction performed 10 weeks after injection. Histologic examination of lung and primary tumor specimens confirmed ICG localization only within the tumor bed. Mice with visible or palpable tumor growth had greater hindlimb fluorescence (3.5 ± 2.3 arbitrary perfusion units [APU], defined as the fluorescence pixel return normalized by the detector) compared with those with a negative examination (0.71 ± 0.38 APU, -2.7 ± 0.5 mean difference, 95% confidence interval -3.7 to -1.8, p < 0.001). A strong linear trend (r = 0.81, p < 0.01) was observed between primary tumor and lung fluorescence, suggesting that quantitative ICG tumor fluorescence is directly related to eventual metastatic burden. We did not find a correlation (r = 0.04, p = 0.45) between normalized primary tumor fluorescence and CT volumetric measurements. We

  12. In vivo quantitative bioluminescence tomography using heterogeneous and homogeneous mouse models.

    PubMed

    Liu, Junting; Wang, Yabin; Qu, Xiaochao; Li, Xiangsi; Ma, Xiaopeng; Han, Runqiang; Hu, Zhenhua; Chen, Xueli; Sun, Dongdong; Zhang, Rongqing; Chen, Duofang; Chen, Dan; Chen, Xiaoyuan; Liang, Jimin; Cao, Feng; Tian, Jie

    2010-06-07

    Bioluminescence tomography (BLT) is a new optical molecular imaging modality which can monitor both physiological and pathological processes by using bioluminescent light-emitting probes in small living animals. In particular, this technology possesses great potential in drug development, early detection, and therapy monitoring in preclinical settings. In the present study, we developed a dual-modality BLT prototype system with a micro-computed tomography (MicroCT) registration approach, and improved the quantitative reconstruction algorithm based on an adaptive hp finite element method (hp-FEM). Detailed comparisons of source reconstruction between heterogeneous and homogeneous mouse models were performed. The models include mice with an implanted luminescence source and tumor-bearing mice with a firefly luciferase reporter gene. Our data suggest that reconstruction based on the heterogeneous mouse model is more accurate in localization and quantification than the homogeneous mouse model with appropriate optical parameters, and that BLT allows super-early tumor detection in vivo based on tomographic reconstruction of the heterogeneous mouse model signal.

  13. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway

    PubMed Central

    Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian

    2017-01-01

    Since railway transport possesses the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then, generalized cost functions are formulated that embody the factors shippers consider when choosing a mode and path; these functions capture congestion costs on roads and the capacity constraints of railways and freight stations. Based on the general network and the generalized cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose their transportation mode and path independently. Since the model is nonlinear and challenging to solve, we linearize it with a method that uses tangent lines to construct an envelope curve, as sketched below. Finally, a numerical example is presented to test the model and to show how to make a quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536
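
    A sketch of the tangent-line (envelope) linearization step, using a BPR-style congestion cost purely as an illustration of how a convex cost can be replaced by the maximum of a few tangents:

      # Outer-approximate a convex congestion-cost curve by tangent lines.
      import numpy as np

      t0, cap = 10.0, 100.0
      f = lambda x: t0 * (1.0 + 0.15 * (x / cap) ** 4)    # illustrative cost
      df = lambda x: t0 * 0.6 * x ** 3 / cap ** 4         # its derivative

      pts = np.array([20.0, 60.0, 100.0, 140.0])          # tangency points
      tangents = [(f(p) - df(p) * p, df(p)) for p in pts] # (intercept, slope)

      x = np.linspace(0.0, 160.0, 9)
      envelope = np.max([a + b * x for a, b in tangents], axis=0)
      print("max under-approximation error:", np.round(np.max(f(x) - envelope), 3))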

  14. Linear model describing three components of flow in karst aquifers using 18O data

    USGS Publications Warehouse

    Long, Andrew J.; Putnam, L.D.

    2004-01-01

    The stable isotope of oxygen, 18O, is used as a naturally occurring ground-water tracer. Time-series data for δ18O are analyzed to model the distinct responses and relative proportions of the conduit, intermediate, and diffuse flow components in karst aquifers. This analysis also describes mathematically the dynamics of the transient fluid interchange between conduits and diffusive networks. Conduit and intermediate flow are described by linear-systems methods, whereas diffuse flow is described by mass-balance methods. An automated optimization process estimates parameters of lognormal, Pearson type III, and gamma distributions, which are used as transfer functions in linear-systems analysis. Diffuse flow and mixing parameters also are estimated by these optimization methods. Results indicate the relative proximity of a well to a main conduit flowpath and can help to predict the movement and residence times of potential contaminants. The three-component linear model is applied to five wells, which respond to changes in the isotopic composition of point recharge water from a sinking stream in the Madison aquifer in the Black Hills of South Dakota. Flow velocities as much as 540 m/d and system memories of as much as 71 years are estimated by this method. Also, the mean, median, and standard deviation of traveltimes; time to peak response; and the relative fraction of flow for each of the three components are determined for these wells. This analysis infers that flow may branch apart and rejoin as a result of an anastomotic (or channeled) karst network.
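
    The linear-systems part of such an analysis amounts to convolving the recharge signal with a parametric transfer function. A minimal sketch with a gamma-distribution kernel (the shape, scale and input series are invented; the study estimated these by optimization):

      # Convolve a delta-18O input series with a gamma transfer function.
      import numpy as np
      from scipy.stats import gamma

      dt = 1.0                                   # time step, days
      t = np.arange(0.0, 365.0, dt)
      h = gamma.pdf(t, a=3.0, scale=20.0)
      h /= h.sum() * dt                          # normalize to unit area

      rng = np.random.default_rng(3)
      d18o_in = -14.0 + np.cumsum(rng.normal(scale=0.05, size=t.size))
      anomaly = d18o_in - d18o_in.mean()
      d18o_out = np.convolve(anomaly, h, mode="full")[: t.size] * dt + d18o_in.mean()
      print("output range:", np.round(d18o_out.min(), 2), np.round(d18o_out.max(), 2))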

  15. SYN-JEM: A Quantitative Job-Exposure Matrix for Five Lung Carcinogens.

    PubMed

    Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans

    2016-08-01

    The use of measurement data in occupational exposure assessment allows more quantitative analyses of possible exposure-response relations. We describe a quantitative exposure assessment approach for five lung carcinogens (i.e. asbestos, chromium-VI, nickel, polycyclic aromatic hydrocarbons (via its proxy benzo(a)pyrene (BaP)) and respirable crystalline silica). A quantitative job-exposure matrix (JEM) was developed based on statistical modeling of large quantities of personal measurements. Empirical linear models were developed using personal occupational exposure measurements (n = 102,306) from Europe and Canada, as well as auxiliary information such as job (industry), year of sampling, region, an a priori exposure rating of each job (none, low, and high exposed), sampling and analytical methods, and sampling duration. The model outcomes were used to create a JEM with a quantitative estimate of the level of exposure by job, year, and region. Decreasing time trends were observed for all agents between the 1970s and 2009, ranging from -1.2% per year for personal BaP and nickel exposures to -10.7% for asbestos (in the time period before an asbestos ban was implemented). Regional differences in exposure concentrations (adjusted for measured jobs, years of measurement, and sampling method and duration) varied by agent, ranging from a factor of 3.3 for chromium-VI up to a factor of 10.5 for asbestos. We estimated time-, job-, and region-specific exposure levels for four (asbestos, chromium-VI, nickel, and RCS) of the five considered lung carcinogens. Through statistical modeling of large amounts of personal occupational exposure measurement data we were able to derive a quantitative JEM to be used in community-based studies. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  16. The Metaphors That Elementary School Students Use to Describe the Term "Teacher"

    ERIC Educational Resources Information Center

    Karadag, Ruhan; Gültekin, Mehmet

    2012-01-01

    The aim of this study is to investigate metaphors that elementary school 5th and 8th grade students (N = 567) use in order to describe the term "teacher". The data were collected using a questionnaire consisting of open-ended questions, and analyzed using qualitative and quantitative analysis techniques. Content analysis technique was…

  17. Quantitative Evaluation of Performance during Robot-assisted Treatment.

    PubMed

    Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G

    2016-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots in extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter to assess the performance of subjects during upper limb robotic training, exploiting data automatically recorded by the robot, with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) performed a training with Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed to perform a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase in the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities in a way that is consistent with a well-known clinical scale.

  18. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method

  19. Quantifying Zika: Advancing the Epidemiology of Zika With Quantitative Models.

    PubMed

    Keegan, Lindsay T; Lessler, Justin; Johansson, Michael A

    2017-12-16

    When Zika virus (ZIKV) emerged in the Americas, little was known about its biology, pathogenesis, and transmission potential, and the scope of the epidemic was largely hidden, owing to generally mild infections and no established surveillance systems. Surges in congenital defects and Guillain-Barré syndrome alerted the world to the danger of ZIKV. In the context of limited data, quantitative models were critical in reducing uncertainties and guiding the global ZIKV response. Here, we review some of the models used to assess the risk of ZIKV-associated severe outcomes, the potential speed and size of ZIKV epidemics, and the geographic distribution of ZIKV risk. These models provide important insights and highlight significant unresolved questions related to ZIKV and other emerging pathogens. Published by Oxford University Press for the Infectious Diseases Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  20. Decompression Sickness After Air Break in Prebreathe Described with a Survival Model

    NASA Technical Reports Server (NTRS)

    Conkin, J.; Pilmanis, A. A.

    2010-01-01

    Data from Brooks City-Base show the decompression sickness (DCS) and venous gas emboli (VGE) consequences of air breaks in a resting 100% O2 prebreathe (PB) prior to a hypobaric exposure. METHODS: DCS and VGE survival times from 95 controls for a 60 min PB prior to 2-hr or 4-hr exposures to 4.37 psia were statistically compared to 3 break-in-PB conditions: a 10 min (n=40), 20 min (n=40), or 60 min break (n=32) 30 min into the PB, followed by 30 min of PB. Ascent rate was 1,524 m/min, and all exposures included light exercise and 4 min of VGE monitoring of the heart chambers at 16 min intervals. DCS survival times for the combined control and air-break conditions were described with an accelerated log-logistic model in which exponential N2 washin during an air break was described with a 10 min half-time and washout during PB with a 60 min half-time. RESULTS: There was no difference in VGE or DCS survival times among the 3 different air breaks, or when air breaks were compared to control VGE times. However, the 10, 20, and 60 min air breaks had significantly earlier survival times compared to control DCS times, particularly early in the exposures. CONCLUSION: Air breaks of 10, 20, and 60 min after 30 min of a 60 min PB reduced DCS survival time. The survival model combined discrete comparisons into a global description mechanistically linked to asymmetrical N2 washin and washout kinetics based on inspired pN2. Our unvalidated regression is used to compute the additional PB time needed to compensate for an air break in PB within the range of tested conditions.
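
    The asymmetrical kinetics at the heart of the model reduce to exponential washin and washout with the half-times quoted above. A minimal sketch (the inspired pN2 values are illustrative sea-level numbers, not taken from the paper):

      # Tissue N2 pressure through a prebreathe with a 20 min air break:
      # washin half-time 10 min (on air), washout half-time 60 min (on O2).
      import numpy as np

      def update(p, p_insp, half_time, minutes):
          k = np.log(2.0) / half_time
          return p_insp + (p - p_insp) * np.exp(-k * minutes)

      p_air, p_o2 = 11.6, 0.0                  # inspired pN2 (psia), illustrative
      p = update(p_air, p_o2, 60.0, 30.0)      # first 30 min of O2 prebreathe
      p = update(p, p_air, 10.0, 20.0)         # 20 min air break (fast washin)
      p = update(p, p_o2, 60.0, 30.0)          # final 30 min of prebreathe
      print(f"tissue pN2 after break protocol:  {p:.2f} psia")
      print(f"after an uninterrupted 60 min PB: {update(p_air, p_o2, 60.0, 60.0):.2f} psia")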

  1. Applying Knowledge of Quantitative Design and Analysis

    ERIC Educational Resources Information Center

    Baskas, Richard S.

    2011-01-01

    This study compared and contrasted two quantitative scholarly articles in relation to their research designs. Their designs were analyzed by the comparison of research references and research specific vocabulary to describe how various research methods were used. When researching and analyzing quantitative scholarly articles, it is imperative to…

  2. Quantitative Relationships Involving Additive Differences: Numerical Resilience

    ERIC Educational Resources Information Center

    Ramful, Ajay; Ho, Siew Yin

    2014-01-01

    This case study describes the ways in which problems involving additive differences with unknown starting quantities constrain the problem solver in articulating the inherent quantitative relationship. It gives empirical evidence to show how numerical reasoning takes over as a Grade 6 student instantiates the quantitative relation by resorting to…

  3. A spatial age-structured model for describing sea lamprey (Petromyzon marinus) population dynamics

    USGS Publications Warehouse

    Robinson, Jason M.; Wilberg, Michael J.; Adams, Jean V.; Jones, Michael L.

    2013-01-01

    The control of invasive sea lampreys (Petromyzon marinus) presents large-scale management challenges in the Laurentian Great Lakes, and no modeling approach had been developed that describes the spatial dynamics of lamprey populations. We developed and validated a spatial, age-structured model and applied it to a sea lamprey population in a large river in the Great Lakes basin. We considered 75 discrete spatial areas and included a stock-recruitment function, spatial recruitment patterns, natural mortality, chemical treatment mortality, and larval metamorphosis. Recruitment was variable, and an upstream shift in recruitment location was observed over time. From 1993 to 2011, recruitment, larval abundance, and the abundance of metamorphosing individuals decreased by 80%, 84%, and 86%, respectively. The model successfully identified areas of high larval abundance and showed that areas of low larval density contribute significantly to the population. Estimated treatment mortality was less than expected but had a large population-level impact. The results and general approach of this work have applications for sea lamprey control throughout the Great Lakes and for the restoration and conservation of native lamprey species globally.
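
    A schematic of the model's structure for a single spatial area, with invented rates (the actual model estimates these quantities and tracks 75 areas):

      # Age-structured larval projection with pulsed treatment mortality.
      import numpy as np

      n_ages, years = 4, 19                            # ages 0-3, 1993-2011
      survival = 0.55                                  # annual natural survival
      metamorphosis = np.array([0.0, 0.0, 0.3, 1.0])   # fraction leaving by age
      N = np.full(n_ages, 1000.0)

      for year in range(years):
          recruits = 2000.0 * N.sum() / (1000.0 + N.sum())  # Beverton-Holt
          treat = 0.6 if year % 4 == 0 else 0.0             # chemical treatment
          N = N * survival * (1.0 - treat) * (1.0 - metamorphosis)
          N = np.roll(N, 1)                                 # advance one age class
          N[0] = recruits
      print("final larval abundance by age:", np.round(N, 1))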

  4. Quantitative gene-gene and gene-environment mapping for leaf shape variation using tree-based models.

    PubMed

    Fu, Guifang; Dai, Xiaotian; Symanzik, Jürgen; Bushman, Shaun

    2017-01-01

    Leaf shape traits have long been a focus of many disciplines, but the complex genetic and environmental interactive mechanisms regulating leaf shape variation have not yet been investigated in detail. The question of the respective roles of genes and environment and how they interact to modulate leaf shape is a thorny evolutionary problem, and sophisticated methodology is needed to address it. In this study, we investigated a framework-level approach that inputs shape image photographs and genetic and environmental data, and then outputs the relative importance ranks of all variables after integrating shape feature extraction, dimension reduction, and tree-based statistical models. The power of the proposed framework was confirmed by simulation and a Populus szechuanica var. tibetica data set. This new methodology resulted in the detection of novel shape characteristics, and also confirmed some previous findings. The quantitative modeling of a combination of polygenetic, plastic, epistatic, and gene-environment interactive effects, as investigated in this study, will improve the discernment of quantitative leaf shape characteristics, and the methods are ready to be applied to other leaf morphology data sets. Unlike the majority of approaches in the quantitative leaf shape literature, this framework-level approach is data-driven, without assuming any pre-known shape attributes, landmarks, or model structures. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
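
    A minimal sketch of the tree-based importance-ranking step, on simulated genotype and environment predictors with a built-in gene-environment interaction (the framework in the paper additionally extracts shape features from images first):

      # Rank genetic and environmental predictors of a leaf-shape feature.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(4)
      n = 300
      geno = rng.integers(0, 3, size=(n, 5)).astype(float)  # 5 marker genotypes
      env = rng.normal(size=(n, 2))                         # 2 environment variables
      X = np.hstack([geno, env])
      # A main effect, a gene-environment interaction, and noise:
      y = 0.8 * geno[:, 0] + 0.5 * geno[:, 1] * env[:, 0] + rng.normal(scale=0.3, size=n)

      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
      names = [f"G{i}" for i in range(5)] + ["E0", "E1"]
      for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1]):
          print(f"{name}: {imp:.3f}")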

  5. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed to allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls, (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor, (4) calculations relating to the collection efficiency of the new AeroChem reactor, and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.

  6. Describing different brain computer interface systems through a unique model: a UML implementation.

    PubMed

    Quitadamo, Lucia Rita; Marciani, Maria Grazia; Cardarilli, Gian Carlo; Bianchi, Luigi

    2008-01-01

    All the protocols currently implemented in brain computer interface (BCI) experiments are characterized by different structural and temporal entities. Moreover, due to the lack of a unique descriptive model for BCI systems, there is no standard way to define the structure and the timing of a BCI experimental session among different research groups, and there is great discordance on the meaning of the most common terms dealing with BCI, such as trial, run and session. The aim of this paper is to provide a unified modeling language (UML) implementation of BCI systems through a unique dynamic model which is able to describe the main protocols defined in the literature (P300, mu-rhythms, SCP, SSVEP, fMRI) and proves to be reasonable and adjustable according to different requirements. This model includes a set of definitions of the typical entities encountered in a BCI, diagrams which explain the structural correlations among them, and a detailed description of the timing of a trial. The latter represents an innovation with respect to the models already proposed in the literature. The UML documentation and the possibility of adapting this model to the different BCI systems built to date make it a basis for the implementation of new systems and a means for the unification and dissemination of resources. The model, with all the diagrams and definitions reported in the paper, forms the core of the body language framework, a free set of routines and tools for the implementation, optimization and delivery of cross-platform BCI systems.

  7. Quantitative rubber sheet models of gravitation wells using Spandex

    NASA Astrophysics Data System (ADS)

    White, Gary

    2008-04-01

    Long a staple of introductory treatments of general relativity, the rubber sheet model exhibits Wheeler's concise summary---``Matter tells space-time how to curve and space-time tells matter how to move''---very nicely. But what of the quantitative aspects of the rubber sheet model: how far can the analogy be pushed? We show^1 that when a mass M is suspended from the center of an otherwise unstretched elastic sheet affixed to a circular boundary, it exhibits a distortion far from the center given by h = A(Mr^2)^(1/3). Here, as might be expected, h and r are the vertical and axial distances from the center, but this result is not the expected logarithmic form of 2-D solutions to Laplace's equation (the stretched drumhead). This surprise has a natural explanation and is confirmed experimentally with Spandex as the medium, and its consequences for general rubber sheet models are pursued. ^1``The shape of `the Spandex' and orbits upon its surface,'' American Journal of Physics, 70, 48-52 (2002), G. D. White and M. Walker. See also the comment by Don S. Lemons and T. C. Lipscombe, also in AJP, 70, 1056-1058 (2002).

  8. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and of adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method to modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The data set included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance: it presents a unique application of chemometric methods of data exploration to modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment.

  9. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and of adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method to modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The data set included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance: it presents a unique application of chemometric methods of data exploration to modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment. PMID:27438794

  10. The importance of topography-controlled sub-grid process heterogeneity and semi-quantitative prior constraints in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-03-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13%. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19

  11. Experimentally validated quantitative linear model for the device physics of elastomeric microfluidic valves

    NASA Astrophysics Data System (ADS)

    Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French

    2007-03-01

    A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295 μm lateral dimensions, 16-39 μm membrane thickness, and 1-28 psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150 μm width) are shown to behave like thin springs.

  12. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
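
    The FOSM step described above is a one-line matrix computation once the sensitivities are in hand. A minimal sketch with an invented sensitivity matrix and input covariance:

      # First-Order Second Moment propagation: C_out = J C_in J^T.
      import numpy as np

      # J[i, j] = d(head_i)/d(parameter_j), e.g. from a sensitivity-equipped
      # flow model; the values here are placeholders.
      J = np.array([[0.8, 0.1],
                    [0.4, 0.5],
                    [0.1, 0.9]])
      C_in = np.array([[0.25, 0.05],        # input parameter covariance
                       [0.05, 0.16]])

      C_out = J @ C_in @ J.T                # first-order output covariance
      head_var = np.diag(C_out)
      print("head variance by node:", np.round(head_var, 4))
      print("sample next at node", int(np.argmax(head_var)))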

  13. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    PubMed

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests, as well as the use of a genetic algorithm, allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability, with r(2) values of 0.949 and 0.838 and cross-validated q(2) values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD(50) values for the test set data, with r(2) of 0.871 and 0.788 for the organophosphate and carbamate groups respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. QSAR models developed in this study should help the further design of novel potent insecticides.

  14. Modelling of occupational respirable crystalline silica exposure for quantitative exposure assessment in community-based case-control studies.

    PubMed

    Peters, Susan; Vermeulen, Roel; Portengen, Lützen; Olsson, Ann; Kendzia, Benjamin; Vincent, Raymond; Savary, Barbara; Lavoué, Jérôme; Cavallo, Domenico; Cattaneo, Andrea; Mirabelli, Dario; Plato, Nils; Fevotte, Joelle; Pesch, Beate; Brüning, Thomas; Straif, Kurt; Kromhout, Hans

    2011-11-01

    We describe an empirical model for exposure to respirable crystalline silica (RCS) to create a quantitative job-exposure matrix (JEM) for community-based studies. Personal measurements of exposure to RCS from Europe and Canada were obtained for exposure modelling. A mixed-effects model was elaborated, with region/country and job titles as random effect terms. The fixed effect terms included year of measurement, measurement strategy (representative or worst-case), sampling duration (minutes) and a priori exposure intensity rating for each job from an independently developed JEM (none, low, high). 23,640 personal RCS exposure measurements, covering a time period from 1976 to 2009, were available for modelling. The model indicated an overall downward time trend in RCS exposure levels of -6% per year. Exposure levels were higher in the UK and Canada, and lower in Northern Europe and Germany. Worst-case sampling was associated with higher reported exposure levels and an increase in sampling duration was associated with lower reported exposure levels. Highest predicted RCS exposure levels in the reference year (1998) were for chimney bricklayers (geometric mean 0.11 mg m(-3)), monument carvers and other stone cutters and carvers (0.10 mg m(-3)). The resulting model enables us to predict time-, job-, and region/country-specific exposure levels of RCS. These predictions will be used in the SYNERGY study, an ongoing pooled multinational community-based case-control study on lung cancer.
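
    The modelling strategy reduces to a mixed-effects regression of log-transformed exposure. A minimal sketch with simulated data (the variable names, effect sizes and statsmodels formula are illustrative, not the study's specification):

      # Mixed model: job title as random effect; year and strategy as fixed.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      n = 400
      df = pd.DataFrame({
          "job": rng.choice(["bricklayer", "carver", "miner", "driver"], size=n),
          "year": rng.integers(1976, 2010, size=n),
          "worst_case": rng.integers(0, 2, size=n),
      })
      effect = {"bricklayer": 0.6, "carver": 0.5, "miner": 0.2, "driver": -0.8}
      df["log_rcs"] = (df["job"].map(effect)
                       - 0.06 * (df["year"] - 1998)     # about -6% per year
                       + 0.4 * df["worst_case"]
                       + rng.normal(scale=0.5, size=n))

      fit = smf.mixedlm("log_rcs ~ I(year - 1998) + worst_case",
                        df, groups=df["job"]).fit()
      print(fit.params)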

  15. A Network Neuroscience of Human Learning: Potential To Inform Quantitative Theories of Brain and Behavior

    PubMed Central

    Bassett, Danielle S.; Mattar, Marcelo G.

    2017-01-01

    Humans adapt their behavior to their external environment in a process often facilitated by learning. Efforts to describe learning empirically can be complemented by quantitative theories that map changes in neurophysiology to changes in behavior. In this review we highlight recent advances in network science that offer a set of tools and a general perspective that may be particularly useful in understanding types of learning that are supported by distributed neural circuits. We describe recent applications of these tools to neuroimaging data that provide unique insights into adaptive neural processes, the attainment of knowledge, and the acquisition of new skills, forming a network neuroscience of human learning. While promising, the tools have yet to be linked to the well-formulated models of behavior that are commonly utilized in cognitive psychology. We argue that continued progress will require the explicit marriage of network approaches to neuroimaging data and quantitative models of behavior. PMID:28259554

  16. A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior.

    PubMed

    Bassett, Danielle S; Mattar, Marcelo G

    2017-04-01

    Humans adapt their behavior to their external environment in a process often facilitated by learning. Efforts to describe learning empirically can be complemented by quantitative theories that map changes in neurophysiology to changes in behavior. In this review we highlight recent advances in network science that offer a set of tools and a general perspective that may be particularly useful in understanding types of learning that are supported by distributed neural circuits. We describe recent applications of these tools to neuroimaging data that provide unique insights into adaptive neural processes, the attainment of knowledge, and the acquisition of new skills, forming a network neuroscience of human learning. While promising, the tools have yet to be linked to the well-formulated models of behavior that are commonly utilized in cognitive psychology. We argue that continued progress will require the explicit marriage of network approaches to neuroimaging data and quantitative models of behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Spatiotemporal Characterization of a Fibrin Clot Using Quantitative Phase Imaging

    PubMed Central

    Gannavarpu, Rajshekhar; Bhaduri, Basanta; Tangella, Krishnarao; Popescu, Gabriel

    2014-01-01

    Studying the dynamics of fibrin clot formation and its morphology is an important problem in biology and has significant impact on several scientific and clinical applications. We present a label-free technique based on quantitative phase imaging to address this problem. Using quantitative phase information, we characterized fibrin polymerization in real-time and present a mathematical model describing the transition from liquid to gel state. By exploiting the inherent optical sectioning capability of our instrument, we measured the three-dimensional structure of the fibrin clot. From this data, we evaluated the fractal nature of the fibrin network and extracted the fractal dimension. Our non-invasive and speckle-free approach analyzes the clotting process without the need for external contrast agents. PMID:25386701
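
    The fractal-dimension step can be illustrated with a standard box-counting estimator on a thresholded image (the random test image below stands in for a segmented phase map of the fibrin network):

      # Box-counting fractal dimension of a binary image.
      import numpy as np

      def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
          counts = []
          for s in sizes:
              h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
          # Slope of log N(s) versus log(1/s) estimates the dimension.
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope

      rng = np.random.default_rng(6)
      img = rng.random((256, 256)) > 0.7       # stand-in for a thresholded image
      print(f"estimated fractal dimension: {box_count_dimension(img):.2f}")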

  18. SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.

    PubMed

    Weight, Michael D; Harpending, Henry

    2017-01-01

    The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
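
    For reference, the fundamental model mentioned above is the breeder's equation; a one-line worked example follows (the numbers are illustrative, not from the paper):

      % Response to selection: R = h^2 S, where S is the selection
      % differential and h^2 the narrow-sense heritability.
      % With h^2 = 0.4 and S = 0.5 phenotypic standard deviations,
      % the expected per-generation response is
      \[
        R = h^{2} S = 0.4 \times 0.5 = 0.2 \ \text{SD per generation.}
      \]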

  19. Quantitative assessment of computational models for retinotopic map formation

    PubMed Central

    Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2014-01-01

    Molecular and activity‐based cues acting together are thought to guide retinal axons to their terminal sites in vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of the mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity‐based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2‐EphA3 ki/ki, Isl2‐EphA3 ki/+, ephrin‐A2,A3,A5 triple knock‐out (TKO), and Math5 −/− (Atoh7). Two models successfully reproduced the extent of the Math5 −/− anteromedial projection, but only one of those could account for the collapse point in Isl2‐EphA3 ki/+. The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin‐A2,A3,A5 TKO phenotype, suggesting either an incomplete knock‐out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. © 2014 Wiley Periodicals, Inc. Develop Neurobiol 75: 641–666, 2015 PMID:25367067

  20. Compilation of a near-infrared library for the construction of quantitative models of amoxicillin and potassium clavulanate oral dosage forms

    NASA Astrophysics Data System (ADS)

    Zou, Wen-bo; Chong, Xiao-meng; Wang, Yan; Hu, Chang-qin

    2018-05-01

    The accuracy of NIR quantitative models depends on calibration samples with concentration variability. Conventional sample collection methods have shortcomings, especially their time-consuming nature, which remains a bottleneck in the application of NIR models for Process Analytical Technology (PAT) control. A study was performed to solve the problem of sample selection and collection for the construction of NIR quantitative models. Amoxicillin and potassium clavulanate oral dosage forms were used as examples. The aim was to find a general approach to rapidly construct NIR quantitative models using an NIR spectral library, based on the idea of a universal model [20,21]. The NIR spectral library of amoxicillin and potassium clavulanate oral dosage forms was defined, and consisted of spectra of 377 batches of samples produced by 26 domestic pharmaceutical companies, including tablets, dispersible tablets, chewable tablets, oral suspensions, and granules. The correlation coefficient (rT) was used to indicate the similarity of the spectra. The samples' calibration sets were selected from the spectral library according to the median rT of the samples to be analyzed. The rT of the samples selected was close to the median rT, with differences of 1.0% to 1.5%. We concluded that sample selection is not a problem when constructing NIR quantitative models using a spectral library, in contrast to conventional methods of determining universal models. Sample spectra spanning a suitable concentration range for the NIR models were collected quickly. In addition, the models constructed through this method were more easily targeted.

  1. A model describing intra-granular fission gas behaviour in oxide fuel for advanced engineering tools

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Pastore, G.; Barani, T.; Magni, A.; Luzzi, L.; Van Uffelen, P.; Pitts, S. A.; Alfonsi, A.; Hales, J. D.

    2018-04-01

    The description of intra-granular fission gas behaviour is a fundamental part of any model for the prediction of fission gas release and swelling in nuclear fuel. In this work we present a model describing the evolution of intra-granular fission gas bubbles in terms of bubble number density and average size, coupled to gas release to grain boundaries. The model considers the fundamental processes of single gas atom diffusion, gas bubble nucleation, re-solution and gas atom trapping at bubbles. The model is derived from a detailed cluster dynamics formulation, yet it consists of only three differential equations in its final form; hence, it can be efficiently applied in engineering fuel performance codes while retaining a physical basis. We discuss improvements relative to previous single-size models for intra-granular bubble evolution. We validate the model against experimental data, both in terms of bubble number density and average bubble radius. Lastly, we perform an uncertainty and sensitivity analysis by propagating the uncertainties in the parameters to model results.
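
    As a rough illustration of how a compact intra-granular model of this general shape can be integrated, the sketch below evolves three coupled quantities (single gas atoms in the matrix, bubble number density, and gas stored in bubbles) under source, nucleation, trapping, re-solution, and diffusional-loss terms. Every rate constant and the exact form of each term is invented for illustration; the paper's equations and parameters differ.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative rate constants (not the paper's values)
    BETA = 1.0e19   # gas production, atoms m^-3 s^-1
    NU   = 1.0e17   # bubble nucleation rate, bubbles m^-3 s^-1
    G    = 1.0e-21  # trapping coefficient, m^3 s^-1
    B    = 1.0e-5   # re-solution rate, s^-1
    K    = 1.0e-6   # effective diffusional loss to grain boundaries, s^-1

    def rhs(t, y):
        c, N, m = y        # single atoms, bubble density, gas in bubbles
        trap = G * c * N   # atoms captured by bubbles
        reso = B * m       # atoms knocked back into the matrix
        return [BETA - trap + reso - K * c,  # single-atom balance
                NU,                          # nucleation creates bubbles
                trap - reso]                 # gas stored in bubbles

    sol = solve_ivp(rhs, (0.0, 1.0e7), [0.0, 0.0, 0.0], method="LSODA")
    c, N, m = sol.y[:, -1]
    print(f"bubble density {N:.2e} m^-3, mean atoms per bubble {m / N:.2e}")
    ```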

  2. A standardised graphic method for describing data privacy frameworks in primary care research using a flexible zone model.

    PubMed

    Kuchinke, Wolfgang; Ohmann, Christian; Verheij, Robert A; van Veen, Evert-Ben; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C

    2014-12-01

    To develop a model describing the core concepts and principles of data flow, data privacy, and confidentiality in a simple and flexible way, using concise process descriptions and a diagrammatic notation applied to research workflow processes. The model should help to generate robust data privacy frameworks for research done with patient data. Based on an exploration of EU legal requirements for data protection and privacy, data access policies, and the existing privacy frameworks of research projects, basic concepts and common processes were extracted, described, and incorporated into a model with a formal graphical representation and a standardised notation. The Unified Modelling Language (UML) notation was enriched with workflow symbols and custom symbols to enable the representation of extended data flow requirements, data privacy and data security requirements, and privacy enhancing techniques (PET), and to allow privacy threat analysis for research scenarios. Our model is built upon the concept of three privacy zones (Care Zone, Non-care Zone and Research Zone) containing databases and data transformation operators, such as data linkers and privacy filters. Using these model components, a risk gradient for moving data from a zone of high risk for patient identification to a zone of low risk can be described. The model was applied to the analysis of data flows in several general clinical research use cases and two research scenarios from the TRANSFoRm project (e.g., finding patients for clinical research and linkage of databases). The model was validated by representing research done with the NIVEL Primary Care Database in the Netherlands. The model allows analysis of data privacy and confidentiality issues for research with patient data in a structured way and provides a framework to specify a privacy compliant data flow, to communicate privacy requirements, and to identify weak points for an adequate implementation of data privacy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Determining absolute protein numbers by quantitative fluorescence microscopy.

    PubMed

    Verdaasdonk, Jolien Suzanne; Lawrimore, Josh; Bloom, Kerry

    2014-01-01

    Biological questions are increasingly being addressed using a wide range of quantitative analytical tools to examine protein complex composition. Knowledge of the absolute number of proteins present provides insights into organization, function, and maintenance and is used in mathematical modeling of complex cellular dynamics. In this chapter, we outline and describe three microscopy-based methods for determining absolute protein numbers--fluorescence correlation spectroscopy, stepwise photobleaching, and ratiometric comparison of fluorescence intensity to known standards. In addition, we discuss the various fluorescently labeled proteins that have been used as standards for both stepwise photobleaching and ratiometric comparison analysis. A detailed procedure for determining absolute protein number by ratiometric comparison is outlined in the second half of this chapter. Counting proteins by quantitative microscopy is a relatively simple yet very powerful analytical tool that will increase our understanding of protein complex composition. © 2014 Elsevier Inc. All rights reserved.
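
    Of the three methods, ratiometric comparison reduces to a simple proportion: the copy number at an unknown focus equals its background-corrected intensity divided by that of a standard of known copy number imaged under identical settings. A minimal sketch; the numbers and the 32-copy standard are illustrative only.

    ```python
    def copies_from_ratio(i_unknown, i_standard, n_standard, background=0.0):
        """Estimate absolute protein number from a fluorescence intensity ratio."""
        return (i_unknown - background) / (i_standard - background) * n_standard

    # e.g. an unknown cluster vs. a hypothetical 32-copy GFP standard
    print(copies_from_ratio(i_unknown=5400.0, i_standard=1200.0, n_standard=32,
                            background=200.0))
    ```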

  4. Dispersive models describing mosquitoes’ population dynamics

    NASA Astrophysics Data System (ADS)

    Yamashita, W. M. S.; Takahashi, L. T.; Chapiro, G.

    2016-08-01

    The global incidence of dengue and, more recently, Zika virus has increased interest in studying and understanding mosquito population dynamics. Understanding these dynamics is important for public health in countries where climatic and environmental conditions are favorable for the propagation of these diseases. This work is based on the study of nonlinear mathematical models dealing with the life cycle of the dengue mosquito using partial differential equations. We investigate the existence of traveling wave solutions using a semi-analytical method combining dynamical systems techniques and numerical integration. The obtained solutions are validated through numerical simulations using finite difference schemes.

  5. Quantitative structure-property relationship modeling of remote liposome loading of drugs.

    PubMed

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-06-10

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a data set of 60 drugs studied in 366 loading experiments performed internally or reported elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and 5-fold external validation. The external prediction accuracy for binary models was as high as 91-96%; for continuous models the mean coefficient of determination R(2) for regression between predicted and observed values was 0.76-0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
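
    A sketch of the two model types with 5-fold cross-validation, using random forests on synthetic data; the paper used its own descriptors and machine learning approaches, so the features, estimator choice, and median split of D/L below are illustrative only.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(366, 20))   # descriptors + experimental conditions (synthetic)
    dl = rng.lognormal(size=366)     # initial drug/lipid ratio (synthetic)
    y_bin = (dl > np.median(dl)).astype(int)  # binary high/low D/L label

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, y_bin, cv=cv).mean()
    r2 = cross_val_score(RandomForestRegressor(random_state=0), X, dl, cv=cv,
                         scoring="r2").mean()
    print(f"binary accuracy {acc:.2f}, continuous R2 {r2:.2f}")
    ```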

  6. Survival Prediction in Pancreatic Ductal Adenocarcinoma by Quantitative Computed Tomography Image Analysis.

    PubMed

    Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K

    2018-04-01

    Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.
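
    A hedged sketch of the structure of model A (Cox regression on CA19-9 plus image features, scored by concordance index on held-out patients), using the lifelines library on invented data; the feature names and values are not from the study.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 161
    df = pd.DataFrame({
        "ca19_9": rng.lognormal(3.0, 1.0, n),   # serum marker (synthetic)
        "texture_1": rng.normal(size=n),        # CT texture features (synthetic)
        "texture_2": rng.normal(size=n),
        "months": rng.exponential(24.0, n),     # follow-up time
        "event": rng.integers(0, 2, n),         # 1 = death observed
    })
    train, test = df.iloc[:113], df.iloc[113:]  # 70/30 split as in the study

    cph = CoxPHFitter()
    cph.fit(train, duration_col="months", event_col="event")
    # Concordance on the held-out set, analogous to the reported c-index
    print(cph.score(test, scoring_method="concordance_index"))
    ```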

  7. The mathematical model that describes the periodic spouting of a geyser induced by boiling

    NASA Astrophysics Data System (ADS)

    Kagami, Hiroyuki

    2017-04-01

    We have derived and modified a dynamical model of geysers induced by gas inflow, and both regular and irregular spouting dynamics of such geysers have been reproduced by the model. On the other hand, although we have also derived a dynamical model of geysers induced by boiling, the periodic alternation between the spouting state and the pause state had not been adequately captured by that model. For a geyser induced by gas inflow, we have proposed the following mechanism: when spouting begins and water in the spouting tube starts to flow, pressure in the tube decreases according to Bernoulli's theorem, so groundwater flows into the spouting tube; when the amount of this inflow reaches a certain level, the spouting state gives way to the pause state. In this study, by applying this idea to the dynamical model of a geyser induced by boiling, the periodic alternation between the spouting state and the pause state could be reproduced. As a result, the whole picture of the spouting mechanism of a geyser induced by boiling became clear. These results may offer guidance for engineering repairs aimed at preventing the weakening or depletion of geysers, and the study should also be useful for protecting geysers as tourism and environmental resources.

  8. Quantitative Decision Support Requires Quantitative User Guidance

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today's models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to "future-proof" their customers from climate change. How is a US or European based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here, or when a commercial supplier is vastly overselling the insights of today's climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of communicating the uses of climate model output to users and policy makers, as well as to other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to the output of current state-of-the-art general circulation models. Second, a critical evaluation is given of the language often employed in communicating climate model output: language which accurately states that models are "better", have "improved", and now "include" and "simulate" relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. And third, a general approach for evaluating the relevance of quantitative climate model output

  9. Quantitative model of the growth of floodplains by vertical accretion

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to the estimation of floodplain growth for other floodplains throughout the world which do not have detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
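
    The feedback at the heart of the model (each qualifying flood deposits a constant increment, which raises the floodplain and thus the threshold discharge, so fewer future floods qualify) can be illustrated with an assumed exponential stage-frequency relation. All parameter values below are invented.

    ```python
    import numpy as np

    LAMBDA0 = 5.0    # floods/yr exceeding the initial surface (assumed)
    SCALE   = 0.8    # e-folding stage scale, m (assumed)
    DEPOSIT = 0.02   # constant net deposition per qualifying flood, m

    rng = np.random.default_rng(7)
    z = 0.0
    for year in range(200):
        # Expected number of floods exceeding the current floodplain elevation
        n_floods = rng.poisson(LAMBDA0 * np.exp(-z / SCALE))
        z += DEPOSIT * n_floods   # growth slows as z (and the threshold) rises
    print(f"elevation after 200 yr: {z:.2f} m")
    ```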

  10. Quantitative Acoustic Model for Adhesion Evaluation of Pmma/silicon Film Structures

    NASA Astrophysics Data System (ADS)

    Ju, H. S.; Tittmann, B. R.

    2010-02-01

    A poly-methyl-methacrylate (PMMA) film on a silicon substrate is a key structure for photolithography in semiconductor manufacturing processes. This paper presents the potential of scanning acoustic microscopy (SAM) for nondestructive evaluation of the PMMA/Si film structure, whose adhesion failure is commonly encountered during the fabrication and post-fabrication processes. A physical model employing a partial discontinuity in displacement is developed for a rigorous, quantitative evaluation of interfacial weakness. The model is implemented in the matrix method for surface acoustic wave (SAW) propagation in anisotropic media. Our results show that the predicted variations in SAW velocity and reflectance are sensitive to the adhesion condition. Experimental results by the v(z) technique and SAW velocity reconstruction verify the prediction.

  11. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling:• SDMProjectBuilder (which includes the Microbial Source Module as part...

  12. Spatiotemporal microbiota dynamics from quantitative in vitro and in silico models of the gut

    NASA Astrophysics Data System (ADS)

    Hwa, Terence

    The human gut harbors a dynamic microbial community whose composition bears great importance for the health of the host. Here, we investigate how colonic physiology impacts bacterial growth behaviors, which ultimately dictate the gut microbiota composition. Combining measurements of bacterial growth physiology with analysis of published data on human physiology into a quantitative modeling framework, we show how hydrodynamic forces in the colon, in concert with other physiological factors, determine the abundances of the major bacterial phyla in the gut. Our model quantitatively explains the observed variation of microbiota composition among healthy adults, and predicts colonic water absorption (manifested as stool consistency) and nutrient intake to be two key factors determining this composition. The model further reveals that both factors, which have been identified in recent correlative studies, exert their effects through the same mechanism: changes in colonic pH that differentially affect the growth of different bacteria. Our findings show that a predictive and mechanistic understanding of microbial ecology in the human gut is possible, and offer hope for the rational design of intervention strategies to actively control the microbiota. This work is supported by the Bill and Melinda Gates Foundation.

  13. Advances in Quantitative Proteomics of Microbes and Microbial Communities

    NASA Astrophysics Data System (ADS)

    Waldbauer, J.; Zhang, L.; Rizzo, A. I.

    2015-12-01

    Quantitative measurements of gene expression are key to developing a mechanistic, predictive understanding of how microbial metabolism drives many biogeochemical fluxes and responds to environmental change. High-throughput RNA-sequencing can afford a wealth of information about transcript-level expression patterns, but it is becoming clear that expression dynamics are often very different at the protein level where biochemistry actually occurs. These divergent dynamics between levels of biological organization necessitate quantitative proteomic measurements to address many biogeochemical questions. The protein-level expression changes that underlie shifts in the magnitude, or even the direction, of metabolic and biogeochemical fluxes can be quite subtle and test the limits of current quantitative proteomics techniques. Here we describe methodologies for high-precision, whole-proteome quantification that are applicable to both model organisms of biogeochemical interest that may not be genetically tractable, and to complex community samples from natural environments. Employing chemical derivatization of peptides with multiple isotopically-coded tags, this strategy is rapid and inexpensive, can be implemented on a wide range of mass spectrometric instrumentation, and is relatively insensitive to chromatographic variability. We demonstrate the utility of this quantitative proteomics approach in application to both isolates and natural communities of sulfur-metabolizing and photosynthetic microbes.

  14. Quantitative and Qualitative Differences in Morphological Traits Revealed between Diploid Fragaria Species

    PubMed Central

    SARGENT, DANIEL J.; GEIBEL, M.; HAWKINS, J. A.; WILKINSON, M. J.; BATTEY, N. H.; SIMPSON, D. W.

    2004-01-01

    • Background and Aims The aims of this investigation were to highlight the qualitative and quantitative diversity apparent between nine diploid Fragaria species and produce interspecific populations segregating for a large number of morphological characters suitable for quantitative trait loci analysis. • Methods A qualitative comparison of eight described diploid Fragaria species was performed and measurements were taken of 23 morphological traits from 19 accessions including eight described species and one previously undescribed species. A principal components analysis was performed on 14 mathematically unrelated traits from these accessions, which partitioned the species accessions into distinct morphological groups. Interspecific crosses were performed with accessions of species that displayed significant quantitative divergence and, from these, populations that should segregate for a range of quantitative traits were raised. • Key Results Significant differences between species were observed for all 23 morphological traits quantified and three distinct groups of species accessions were observed after the principal components analysis. Interspecific crosses were performed between these groups, and F2 and backcross populations were raised that should segregate for a range of morphological characters. In addition, the study highlighted a number of distinctive morphological characters in many of the species studied. • Conclusions Diploid Fragaria species are morphologically diverse, yet remain highly interfertile, making the group an ideal model for the study of the genetic basis of phenotypic differences between species through map-based investigation using quantitative trait loci. The segregating interspecific populations raised will be ideal for such investigations and could also provide insights into the nature and extent of genome evolution within this group. PMID:15469944
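
    A minimal sketch of the principal components step on standardized trait data; the array shape mirrors the 19 accessions and 14 mathematically unrelated traits described above, but the values are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    traits = rng.normal(size=(19, 14))  # 19 accessions x 14 traits (synthetic)

    pca = PCA(n_components=3)
    scores = pca.fit_transform(StandardScaler().fit_transform(traits))
    # Accessions close together in PC space form candidate morphological groups
    print(pca.explained_variance_ratio_.round(2))
    ```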

  15. Quantitative Chemical Exchange Saturation Transfer MRI of Intervertebral Disc in a Porcine Model

    PubMed Central

    Zhou, Zhengwei; Bez, Maxim; Tawackoli, Wafa; Giaconi, Joseph; Sheyn, Dmitriy; de Mel, Sandra; Maya, Marcel M.; Pressman, Barry D.; Gazit, Zulma; Pelled, Gadi; Gazit, Dan; Li, Debiao

    2017-01-01

    Purpose Previous studies have associated low pH in intervertebral discs (IVDs) with discogenic back pain. The purpose of this study was to determine whether quantitative CEST (qCEST) MRI can be used to detect pH changes in IVDs in vivo. Methods The exchange rate ksw between glycosaminoglycan (GAG) protons and water protons was determined from qCEST analysis. Its dependence on pH value was investigated in GAG phantoms with varying pH and concentrations. The relationship between ksw and pH was studied further in vivo in a porcine model on a 3T MR scanner and validated using a pH meter. Sodium lactate was injected into the IVDs to induce various pH values within the discs ranging from 5 to 7. Results Phantom and animal results revealed that ksw measured using qCEST MRI is highly correlated with pH level. In the animal studies, the relationship can be described as ksw = 9.2 × 10^6 × 10^(−pH) + 196.9, R2 = 0.7883. Conclusion The exchange rate between GAG and water protons determined from qCEST MRI is closely correlated with pH value. This technique has the potential to noninvasively measure pH in the IVDs of patients with discogenic pain. PMID:27670140
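
    Because the abstract states the fitted relation explicitly, it can be evaluated and inverted directly. A small sketch, assuming ksw is in s^-1 as is usual for exchange rates:

    ```python
    import numpy as np

    def ksw_from_ph(ph):
        """Fitted relation from the study: ksw = 9.2e6 * 10^-pH + 196.9."""
        return 9.2e6 * 10.0 ** (-ph) + 196.9

    def ph_from_ksw(ksw):
        """Invert the fitted relation to estimate pH from a measured ksw."""
        return -np.log10((ksw - 196.9) / 9.2e6)

    for ph in (5.0, 6.0, 7.0):
        k = ksw_from_ph(ph)
        print(f"pH {ph}: ksw {k:8.1f} -> recovered pH {ph_from_ksw(k):.2f}")
    ```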

  16. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
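
    A toy example of the linking pattern reviewed here: a continuum field advanced by an explicit finite-difference diffusion step, coupled each time step to discrete agents that consume the field locally. The grid size, rates, and consumption rule are arbitrary; real hybrid multi-scale ABMs (such as the tuberculosis model cited above) link far richer component models.

    ```python
    import numpy as np

    GRID, STEPS, D, DT, DX = 50, 100, 0.1, 0.1, 1.0
    field = np.ones((GRID, GRID))  # continuum scale, e.g. a nutrient or cytokine
    agents = np.random.default_rng(3).integers(0, GRID, size=(20, 2))  # discrete scale

    for _ in range(STEPS):
        # 1) Continuum update: explicit finite-difference diffusion step
        lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
               np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field) / DX**2
        field += DT * D * lap
        # 2) Agent update: each agent consumes the field at its grid cell
        for x, y in agents:
            field[x, y] = max(field[x, y] - 0.05, 0.0)

    print(f"mean field after coupling: {field.mean():.3f}")
    ```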

  17. A physiology-based model describing heterogeneity in glucose metabolism: the core of the Eindhoven Diabetes Education Simulator (E-DES).

    PubMed

    Maas, Anne H; Rozendaal, Yvonne J W; van Pul, Carola; Hilbers, Peter A J; Cottaar, Ward J; Haak, Harm R; van Riel, Natal A W

    2015-03-01

    Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES will provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from the literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters allows us to describe heterogeneity in the data and shows the capabilities of this model for individualization. We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. © 2014 Diabetes Technology Society.
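
    The verification criterion is easy to state in code: the model passes if at least 95% of glucose data points lie within plus or minus 20% of the corresponding model values. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np

    def verified(model, data, tol=0.20, required=0.95):
        """Paper's criterion: >= 95% of points within +/-20% of the model value."""
        within = np.abs(data - model) <= tol * np.abs(model)
        return within.mean() >= required

    model = np.array([5.0, 7.2, 8.1, 6.4, 5.5])  # glucose, mmol/L (illustrative)
    data  = np.array([5.3, 6.8, 8.9, 6.1, 5.2])
    print(verified(model, data))
    ```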

  18. Quantitative dual-probe microdialysis: mathematical model and analysis.

    PubMed

    Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles

    2002-04-01

    Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis.
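
    A compact sketch of the fitting strategy (least-squares minimization with the simplex algorithm, exposed in scipy as Nelder-Mead), using a stand-in single-exponential outlet curve; the paper's theoretical expressions for interstitial diffusion and clearance are considerably more involved.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(1.0, 60.0, 30)  # minutes
    rng = np.random.default_rng(4)
    observed = 0.8 * np.exp(-t / 15.0) + 0.01 * rng.normal(size=t.size)

    def model(params, t):
        a, tau = params  # stand-ins for the diffusion/clearance parameters
        return a * np.exp(-t / tau)

    def sse(params):
        return np.sum((observed - model(params, t)) ** 2)

    fit = minimize(sse, x0=[1.0, 10.0], method="Nelder-Mead")
    print(fit.x)  # recovered (a, tau)
    ```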

  19. A model to describe the surface gradient-nanograin formation and property of friction stir processed laser Co-Cr-Ni-Mo alloy

    NASA Astrophysics Data System (ADS)

    Li, Ruidi; Yuan, Tiechui; Qiu, Zili

    2014-07-01

    A gradient-nanograin surface layer of a Co-base alloy was prepared by friction stir processing (FSP) of a laser-clad coating in this work. However, a quantitative functional relationship between grain refinement and FSP conditions has been lacking. To address this, an analytic model is derived for the correlations between carbide size and hardness on the one hand and rotary speed and layer depth on the other during in-situ FSP of a laser-clad Co-Cr-Ni-Mo alloy. The model is based on the principles of typical plastic flow in friction welding and dynamic recrystallization. An FSP experiment for modification of the laser-clad Co-based alloy was conducted, and its gradient nanograin structure and hardness were characterized. The model is shown to be consistent with the experimental results.

  20. Another Curriculum Requirement? Quantitative Reasoning in Economics: Some First Steps

    ERIC Educational Resources Information Center

    O'Neill, Patrick B.; Flynn, David T.

    2013-01-01

    In this paper, we describe first steps toward focusing on quantitative reasoning in an intermediate microeconomic theory course. We find student attitudes toward quantitative aspects of economics improve over the duration of the course (as we would hope). Perhaps more importantly, student attitude toward quantitative reasoning improves, in…

  1. Quantitative Courses in a Liberal Education Program: A Case Study

    ERIC Educational Resources Information Center

    Wismath, Shelly L.; Mackay, D. Bruce

    2012-01-01

    This essay argues for the importance of quantitative reasoning skills as part of a liberal education and describes the successful introduction of a mathematics-based quantitative skills course at a small Canadian university. Today's students need quantitative problem-solving skills, to function as adults, professionals, consumers, and citizens in…

  2. A Reactive-Transport Model Describing Methanogen Growth and Methane Production in Diffuse Flow Vents at Axial Seamount

    NASA Astrophysics Data System (ADS)

    Algar, C. K.

    2015-12-01

    Hydrogenotrophic methanogenesis is an important mode of metabolism in deep-sea hydrothermal vents. Diffuse vent fluids often show a depletion in hydrogen with a corresponding increase in methane relative to pure mixing of end-member fluid and seawater, and genomic surveys show an enrichment in genetic sequences associated with known methanogens. However, because we cannot directly sample the subseafloor habitat where these organisms live, constraining the size and activity of these populations remains a challenge and limits our ability to quantify the role they play in vent biogeochemistry. Reactive-transport modeling may provide a useful tool for approaching this problem. Here we present a reactive-transport model describing methane production along the flow path of hydrothermal fluid from its high-temperature end-member to diffuse venting at the seafloor. The model is set up to reflect conditions at several diffuse vents at Axial Seamount. It describes the growth of the two dominant thermophilic methanogens observed there, Methanothermococcus and Methanocaldococcus. Monod and Arrhenius constants for Methanothermococcus thermolithotrophicus and Methanocaldococcus jannaschii were obtained for the model using chemostat and bottle experiments at varying temperatures. The model is used to investigate the influence of different mixing regimes on the subseafloor populations of these methanogens. By varying the model flow path length and subseafloor cell concentrations, and fitting to observed hydrogen and methane concentrations in the venting fluid, the subseafloor biomass, fluid residence time, and methane production rate can be constrained.
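
    The growth kinetics assigned to each methanogen combine a Monod dependence on hydrogen with an Arrhenius dependence on temperature, as sketched below; every parameter value is illustrative rather than one of the constants measured in the chemostat and bottle experiments.

    ```python
    import numpy as np

    R = 8.314  # J mol^-1 K^-1

    def growth_rate(h2, temp_k, mu_max=0.5, ea=7.0e4, t_ref=338.0, ks=1.0e-6):
        """Specific growth rate (1/h): Monod in H2 (mol/L), Arrhenius in T.
        All parameter values are placeholders."""
        arrhenius = np.exp(-ea / R * (1.0 / temp_k - 1.0 / t_ref))
        monod = h2 / (ks + h2)
        return mu_max * arrhenius * monod

    # Hot, H2-rich conditions vs. cooler, H2-poor conditions along a mixing path
    print(growth_rate(h2=5e-6, temp_k=353.0))
    print(growth_rate(h2=5e-7, temp_k=313.0))
    ```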

  3. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    PubMed

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L(2) penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.

  4. Using Qualitative Hazard Analysis to Guide Quantitative Safety Analysis

    NASA Technical Reports Server (NTRS)

    Shortle, J. F.; Allocco, M.

    2005-01-01

    Quantitative methods can be beneficial in many types of safety investigations. However, there are many difficulties in using quantitative methods; for example, there may be little relevant data available. This paper proposes a framework for using qualitative hazard analysis to prioritize hazard scenarios most suitable for quantitative analysis. The framework first categorizes hazard scenarios by severity and likelihood. We then propose another metric, "modeling difficulty", that describes the complexity of modeling a given hazard scenario quantitatively. The combined metrics of severity, likelihood, and modeling difficulty help to prioritize hazard scenarios for which quantitative analysis should be applied. We have applied this methodology to proposed concepts of operations for reduced wake separation for airplane operations at closely spaced parallel runways.
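
    A toy version of the prioritization: score each hazard scenario by severity and likelihood, discount by modeling difficulty, and rank. The scenarios, scores, and combining rule are all invented; the paper defines the metrics qualitatively.

    ```python
    # (name, severity 1-5, likelihood 1-5, modeling difficulty 1-5)
    scenarios = [
        ("wake encounter on parallel approach", 5, 2, 2),
        ("runway incursion",                    5, 1, 4),
        ("missed-approach conflict",            4, 3, 3),
    ]

    def priority(sev, lik, diff):
        # High risk with low modeling difficulty: best candidate for
        # quantitative analysis
        return sev * lik / diff

    for name, s, l, d in sorted(scenarios, key=lambda r: -priority(*r[1:])):
        print(f"{priority(s, l, d):5.2f}  {name}")
    ```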

  5. Interfacial Mechanisms of Water Vapor Sorption into Cellulose Nanofibril Films as Revealed by Quantitative Models.

    PubMed

    Hakalahti, Minna; Faustini, Marco; Boissière, Cédric; Kontturi, Eero; Tammelin, Tekla

    2017-09-11

    Humidity is an efficient instrument for facilitating changes in the local architectures of two-dimensional surfaces assembled from nanoscaled biomaterials. Here, complementary surface-sensitive methods are used to collect explicit and precise experimental evidence on water vapor sorption into a (2,2,6,6-tetramethylpiperidin-1-yl)oxyl (TEMPO) oxidized cellulose nanofibril (CNF) thin film over the relative humidity (RH) range from 0 to 97%. Changes in the thickness and mass of the film due to water vapor uptake are tracked using spectroscopic ellipsometry and quartz crystal microbalance with dissipation monitoring, respectively. Experimental data are evaluated by the quantitative Langmuir/Flory-Huggins/clustering model and the Brunauer-Emmett-Teller model. The isotherms coupled with the quantitative models unveil distinct regions of predominant sorption modes: specific sorption of water molecules below 10% RH, multilayer build-up between 10 and 75% RH, and clustering of water molecules above 75% RH. The study reveals the sorption mechanisms underlying the well-known water uptake behavior of TEMPO oxidized CNF directly at the gas-solid interface.
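
    The multilayer region is the part captured by the Brunauer-Emmett-Teller (BET) model, whose standard isotherm is simple to evaluate; the monolayer capacity qm and energy constant c below are placeholders, not the fitted values from this study.

    ```python
    import numpy as np

    def bet_loading(rh, qm=0.05, c=20.0):
        """BET isotherm: uptake vs. relative humidity, with x = p/p0 = RH/100.
        qm (monolayer capacity) and c (energy constant) are illustrative."""
        x = rh / 100.0
        return qm * c * x / ((1.0 - x) * (1.0 + (c - 1.0) * x))

    rh = np.array([10.0, 30.0, 50.0, 75.0])
    print(bet_loading(rh))  # uptake in the same units as qm
    ```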

  6. Answering Critical Questions Using Quantitative Data

    ERIC Educational Resources Information Center

    Stage, Frances K.

    2007-01-01

    Drawing on the work of several prominent scholars, the author describes the evolution of quantitative critical inquiry and compares this perspective to traditional research paradigms. (Contains 1 table.)

  7. Quantitative Imaging in Cancer Clinical Trials

    PubMed Central

    Yankeelov, Thomas E.; Mankoff, David A.; Schwartz, Lawrence H.; Lieberman, Frank S.; Buatti, John M.; Mountz, James M.; Erickson, Bradley J.; Fennessy, Fiona M.M.; Huang, Wei; Kalpathy-Cramer, Jayashree; Wahl, Richard L.; Linden, Hannah M.; Kinahan, Paul; Zhao, Binsheng; Hylton, Nola M.; Gillies, Robert J.; Clarke, Laurence; Nordstrom, Robert; Rubin, Daniel L.

    2015-01-01

    As anti-cancer therapies designed to target specific molecular pathways have been developed, it has become critical to develop methods to assess the response induced by such agents. While traditional, anatomic CT and MRI exams are useful in many settings, there is increasing evidence that these methods cannot answer the fundamental biological and physiological questions essential for assessment and, eventually, prediction of treatment response in the clinical trial setting, especially in the critical period soon after treatment is initiated. To optimally apply advances in quantitative imaging methods to trials of targeted cancer therapy, new infrastructure improvements are needed that incorporate these emerging techniques into the settings where they are most likely to have impact. In this review, we first elucidate the needs for therapeutic response assessment in the era of molecularly targeted therapy and describe how quantitative imaging can most effectively provide scientifically and clinically relevant data. We then describe the tools and methods required to apply quantitative imaging and provide concrete examples of work making these advances practically available for routine application in clinical trials. We conclude by proposing strategies to surmount barriers to wider incorporation of these quantitative imaging methods into clinical trials and, eventually, clinical practice. Our goal is to encourage and guide the oncology community to deploy standardized quantitative imaging techniques in clinical trials to further personalize care for cancer patients, and to provide a more efficient path for the development of improved targeted therapies. PMID:26773162

  8. Digital Materials - Evaluation of the Possibilities of using Selected Hyperelastic Models to Describe Constitutive Relations

    NASA Astrophysics Data System (ADS)

    Mańkowski, J.; Lipnicki, J.

    2017-08-01

    The authors sought to identify the parameters of numerical models of digital materials, which are a kind of composite resulting from the manufacture of a product in 3D printers. With the arrangement of several printer heads, a new material can result from the mixing of materials with radically different properties during the deposition of a single layer of the product. The new material has properties dependent on the base materials' properties and their proportions. The tensile characteristics of digital materials are often non-linear and qualify to be described by hyperelastic material models. The identification was conducted based on the results of tensile tests, fitting polynomial models with coefficients of various degrees. Drucker's stability criterion was also examined. Fourteen different materials were analyzed.
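
    As an example of the general procedure (fit a polynomial hyperelastic form to tensile data, then check stability), the sketch below fits a two-term Mooney-Rivlin-type model to synthetic uniaxial data and applies a crude monotonicity check in place of a full Drucker stability analysis. The model form and all values are assumptions, not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mooney_rivlin(lam, c10, c01):
        """Uniaxial nominal stress for an incompressible two-term model."""
        return 2.0 * (lam - lam ** -2) * (c10 + c01 / lam)

    lam = np.linspace(1.0, 2.0, 15)  # stretch (synthetic tensile test)
    rng = np.random.default_rng(5)
    stress = mooney_rivlin(lam, 0.3, 0.05) + 0.002 * rng.normal(size=lam.size)

    (c10, c01), _ = curve_fit(mooney_rivlin, lam, stress, p0=[0.1, 0.1])
    # Crude stability proxy: fitted stress should increase with stretch
    stable = np.all(np.diff(mooney_rivlin(lam, c10, c01)) > 0)
    print(c10, c01, stable)
    ```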

  9. Performance of soil particle-size distribution models for describing deposited soils adjacent to constructed dams in the China Loess Plateau

    NASA Astrophysics Data System (ADS)

    Zhao, Pei; Shao, Ming-an; Horton, Robert

    2011-02-01

    Soil particle-size distributions (PSD) have been used to estimate soil hydraulic properties. Various parametric PSD models have been proposed to describe the soil PSD from sparse experimental data, so it is important to determine which PSD model best represents specific soils. Fourteen PSD models were examined in order to determine the best model for representing the deposited soils adjacent to dams in the China Loess Plateau: Skaggs (S-1, S-2, and S-3), fractal (FR), Jaky (J), Lima and Silva (LS), Morgan (M), Gompertz (G), logarithm (L), exponential (E), log-exponential (LE), Weibull (W), van Genuchten type (VG), and Fredlund (F) models. Four hundred and eighty samples were obtained from soils deposited in the Liudaogou catchment. The coefficient of determination (R2), Akaike's information criterion (AIC), and the modified AIC (mAIC) were used. Based upon R2 and AIC, the three- and four-parameter models were both good at describing the PSDs of deposited soils, while the LE, FR, and E models were the poorest. However, the mAIC, in conjunction with the R2 and AIC results, indicated that the W model was optimal for describing the PSD of the deposited soils once the effect of the number of parameters was emphasized. This analysis was also helpful for identifying the best model overall. Our results are applicable to the China Loess Plateau.
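
    A sketch of one step of such a comparison: fit a Weibull-type PSD curve (the study's overall best model) to cumulative particle-size data and compute AIC. The parameterization and the data are assumed for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_psd(d, a, b):
        """Cumulative mass fraction finer than diameter d (assumed form)."""
        return 1.0 - np.exp(-a * d ** b)

    d = np.array([2.0, 20.0, 50.0, 100.0, 250.0, 1000.0])  # micrometres
    frac = np.array([0.08, 0.35, 0.55, 0.72, 0.90, 1.00])  # synthetic data

    popt, _ = curve_fit(weibull_psd, d, frac, p0=[0.01, 1.0])
    rss = np.sum((frac - weibull_psd(d, *popt)) ** 2)
    n, k = d.size, len(popt)
    aic = n * np.log(rss / n) + 2 * k  # lower AIC = better fit/parsimony balance
    print(popt, aic)
    ```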

  10. Modelling the Kampungkota: A quantitative approach in defining Indonesian informal settlements

    NASA Astrophysics Data System (ADS)

    Anindito, D. B.; Maula, F. K.; Akbar, R.

    2018-02-01

    Bandung City is home to 2.5 million inhabitants, some of whom live in slums and squatter settlements. However, these terms are not adequate to describe the Indonesian settlement type known as the kampungkota. Several studies have suggested various variables constituting the kampungkota qualitatively. This study seeks to define the kampungkota in a quantitative manner, using the characteristics of slums and squatter settlements. The samples for this study are 151 villages (kelurahan) in Bandung City. Ordinary Least Squares, Geographically Weighted Regression, and Spatial Cluster and Outlier Analysis are employed. The results suggest that the variables distinguishing a kampungkota may differ with its location. Because a kampungkota may be smaller than the administrative area of a kelurahan, it can also develop beyond the jurisdiction of a single kelurahan, as indicated by the clustering pattern of kampungkota.

  11. Melanoma screening: Informing public health policy with quantitative modelling.

    PubMed

    Gilmore, Stephen

    2017-01-01

    Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite the substantial increase in public and physician awareness of melanoma in Australia over the last 30 years (a result of the introduction of publicly funded mass media campaigns that began in the early 1980s), mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention; that is, sensible sun exposure practices versus early detection. Increased melanoma vigilance on the part of the public and among physicians has resulted in large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement; it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence and mortality (and cost per life saved) by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of developing a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; and second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, implementing the model facilitates the quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in 1982 has resulted in

  12. Early Childhood Teacher Preparation: A Tale of Authors and Multimedia, A Model of Technology Integration Described.

    ERIC Educational Resources Information Center

    Wetzel, Keith; McLean, S. V.

    1997-01-01

    Describes collaboration of two teacher educators, one in early childhood language arts and one in computers in education. Discusses advantages and disadvantages and extensions of this model, including how a college-wide survey revealed that students in teamed courses are better prepared to teach and learn with technology. (DR)

  13. An overview of quantitative approaches in Gestalt perception.

    PubMed

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Brownian Motion at Lipid Membranes: A Comparison of Hydrodynamic Models Describing and Experiments Quantifying Diffusion within Lipid Bilayers.

    PubMed

    Block, Stephan

    2018-05-22

    The capability of lipid bilayers to exhibit fluid-phase behavior is a fascinating property, which enables, for example, membrane-associated components, such as lipids (domains) and transmembrane proteins, to diffuse within the membrane. These diffusion processes are of paramount importance for cells, as they are for example involved in cell signaling processes or the recycling of membrane components, but also for recently developed analytical approaches, which use differences in mobility for certain analytical purposes, such as in-membrane purification of membrane proteins or the analysis of multivalent interactions. Here, models describing the Brownian motion of membrane inclusions (lipids, peptides, proteins, and complexes thereof) in model bilayers (giant unilamellar vesicles, black lipid membranes, supported lipid bilayers) are summarized and model predictions are compared with the available experimental data, thereby allowing the validity of the introduced models to be evaluated. It will be shown that models describing the diffusion in freestanding (Saffman-Delbrück and Hughes-Pailthorpe-White model) and supported bilayers (the Evans-Sackmann model) are well supported by experiments, though only a few experimental studies have been published so far for the latter case, calling for additional tests to reach the same level of experimental confirmation that is currently available for freestanding bilayers.
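
    For freestanding bilayers, the Saffman-Delbrück model gives a closed-form lateral diffusion coefficient, sketched below with order-of-magnitude parameter choices (membrane viscosity, bilayer thickness, water viscosity) that are typical textbook values rather than numbers from this review; the simple logarithmic form used here also glosses over the factor-of-two subtlety in the leaflet-facing solvent viscosities.

    ```python
    import numpy as np

    KB = 1.380649e-23  # J/K

    def saffman_delbruck(radius, t_k=298.0, mu_m=0.1, h=4e-9, mu_w=1e-3):
        """Lateral diffusion coefficient (m^2/s) of a cylindrical inclusion of
        the given radius (m) in a freestanding bilayer. mu_m*h is the membrane
        surface viscosity; all values are illustrative."""
        gamma = 0.5772  # Euler's constant
        return (KB * t_k / (4.0 * np.pi * mu_m * h)
                * (np.log(mu_m * h / (mu_w * radius)) - gamma))

    # Valid while the log argument is large (small inclusions); larger complexes
    # need the Hughes-Pailthorpe-White extension mentioned above.
    for r in (0.5e-9, 2e-9, 5e-9):
        print(f"r = {r:.1e} m -> D = {saffman_delbruck(r):.2e} m^2/s")
    ```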

  15. A Second-Generation Device for Automated Training and Quantitative Behavior Analyses of Molecularly-Tractable Model Organisms

    PubMed Central

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  16. Quantitative genetic models of sexual selection by male choice.

    PubMed

    Nakahashi, Wataru

    2008-09-01

    There are many examples of male mate choice for female traits that tend to be associated with high fertility. I develop quantitative genetic models of a female trait and a male preference to show when such a male preference can evolve. I find that a disagreement between the fertility maximum and the viability maximum of the female trait is necessary for directional male preference (preference for extreme female trait values) to evolve. Moreover, when there is a shortage of available male partners or variance in male nongenetic quality, strong male preference can evolve. Furthermore, I also show that males evolve to exhibit a stronger preference for females that are more feminine (less resemblance to males) than the average female when there is a sexual dimorphism caused by fertility selection which acts only on females.

  17. Targeted methods for quantitative analysis of protein glycosylation

    PubMed Central

    Goldman, Radoslav; Sanda, Miloslav

    2018-01-01

    Quantification of proteins by LC-MS/MS-MRM has become a standard method with broad projected clinical applicability. MRM quantification of protein modifications is, however, far less utilized, especially in the case of glycoproteins. This review summarizes current methods for quantitative analysis of protein glycosylation with a focus on MRM methods. We describe advantages of this quantitative approach, analytical parameters that need to be optimized to achieve reliable measurements, and point out the limitations. Differences between major classes of N- and O-glycopeptides are described and class-specific glycopeptide assays are demonstrated. PMID:25522218

  18. A poroelastic model describing nutrient transport and cell stresses within a cyclically strained collagen hydrogel.

    PubMed

    Vaughan, Benjamin L; Galie, Peter A; Stegemann, Jan P; Grotberg, James B

    2013-11-05

    In the creation of engineered tissue constructs, the successful transport of nutrients and oxygen to the contained cells is a significant challenge. In highly porous scaffolds subject to cyclic strain, the mechanical deformations can induce substantial fluid pressure gradients, which affect the transport of solutes. In this article, we describe a poroelastic model to predict the solid and fluid mechanics of a highly porous hydrogel subject to cyclic strain. The model was validated by matching the predicted penetration of a bead into the hydrogel from the model with experimental observations and provides insight into nutrient transport. Additionally, the model provides estimates of the wall-shear stresses experienced by the cells embedded within the scaffold. These results provide insight into the mechanics of and convective nutrient transport within a cyclically strained hydrogel, which could lead to the improved design of engineered tissues. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  19. xTract: software for characterizing conformational changes of protein complexes by quantitative cross-linking mass spectrometry.

    PubMed

    Walzthoeni, Thomas; Joachimiak, Lukasz A; Rosenberger, George; Röst, Hannes L; Malmström, Lars; Leitner, Alexander; Frydman, Judith; Aebersold, Ruedi

    2015-12-01

    Chemical cross-linking in combination with mass spectrometry generates distance restraints of amino acid pairs in close proximity on the surface of native proteins and protein complexes. In this study we used quantitative mass spectrometry and chemical cross-linking to quantify differences in cross-linked peptides obtained from complexes in spatially discrete states. We describe a generic computational pipeline for quantitative cross-linking mass spectrometry consisting of modules for quantitative data extraction and statistical assessment of the obtained results. We used the method to detect conformational changes in two model systems: firefly luciferase and the bovine TRiC complex. Our method discovers and explains the structural heterogeneity of protein complexes using only sparse structural information.

  20. A physiological pharmacokinetic model describing the disposition of lycopene in healthy men.

    PubMed

    Diwadkar-Navsariwala, Veda; Novotny, Janet A; Gustin, David M; Sosman, Jeffery A; Rodvold, Keith A; Crowell, James A; Stacewicz-Sapuntzakis, Maria; Bowen, Phyllis E

    2003-10-01

    A physiological pharmacokinetic model was developed to describe the disposition of lycopene, delivered as a tomato beverage formulation in five graded doses (10, 30, 60, 90, or 120 mg), for a phase I study in healthy male subjects (five per dose). Blood was collected before dose administration (0 h) and at scheduled intervals until 672 h. Serum concentrations of carotenoids and vitamins were measured by high performance liquid chromatography analysis. The model comprised seven compartments: gastrointestinal tract, enterocytes, chylomicrons, plasma lipoproteins, fast-turnover liver, slow-turnover tissues, and a delay compartment before the enterocytes. As predicted, the percent absorption at the 10 mg dose (33.9 +/- 8.1%) was significantly greater than at the higher doses; however, the amount of lycopene absorbed (mg) was not statistically different (mean: 4.69 +/- 0.55 mg) between doses, suggesting a possible saturation of absorptive mechanisms. The slow-turnover tissue compartment served as a slow-depleting reservoir for lycopene, and the liver represented the fast-turnover pool. Independent of dose, 80% of the subjects absorbed less than 6 mg of lycopene. This may have important implications for planning clinical trials with pharmacological doses of lycopene in cancer control and prevention if absorption saturation occurs at levels that are already being consumed in the population.

  1. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least-squares fit between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Also, thanks to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and fitting the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
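
    A toy rendering of the candidate competition idea: produce one parameter estimate by iterative least-squares fitting and one by picking the best match from a (here, gridded) historical reference database, then keep whichever candidate fits the measured curve better. The one-tissue-like model, the grid, and all values are stand-ins for the paper's machinery.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.5, 60.0, 24)  # minutes

    def tac(t, k1, k2):
        """Stand-in time-activity curve (one-tissue-compartment-like)."""
        return k1 / k2 * (1.0 - np.exp(-k2 * t))

    rng = np.random.default_rng(6)
    measured = tac(t, 0.12, 0.08) + 0.02 * rng.normal(size=t.size)

    def rss(p):
        return np.sum((measured - tac(t, *p)) ** 2)

    # "IF" candidate: iterative least-squares fit
    p_if, _ = curve_fit(tac, t, measured, p0=[0.1, 0.1], maxfev=5000)

    # "ML" candidate: best match from a gridded reference database
    grid = [(k1, k2) for k1 in np.linspace(0.02, 0.3, 15)
                     for k2 in np.linspace(0.02, 0.3, 15)]
    p_ml = min(grid, key=rss)

    # Candidate competition: keep whichever fits the measured curve better
    print(min([tuple(p_if), p_ml], key=rss))
    ```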

  2. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals directly with noisy data rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  3. A Quantitative Model of Keyhole Instability Induced Porosity in Laser Welding of Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Pang, Shengyong; Chen, Weidong; Wang, Wen

    2014-06-01

    Quantitative prediction of the porosity defects in deep penetration laser welding has generally been considered as a very challenging task. In this study, a quantitative model of porosity defects induced by keyhole instability in partial penetration CO2 laser welding of a titanium alloy is proposed. The three-dimensional keyhole instability, weld pool dynamics, and pore formation are determined by direct numerical simulation, and the results are compared to prior experimental results. It is shown that the simulated keyhole depth fluctuations could represent the variation trends in the number and average size of pores for the studied process conditions. Moreover, it is found that it is possible to use the predicted keyhole depth fluctuations as a quantitative measure of the average size of porosity. The results also suggest that due to the shadowing effect of keyhole wall humps, the rapid cooling of the surface of the keyhole tip before keyhole collapse could lead to a substantial decrease in vapor pressure inside the keyhole tip, which is suggested to be the mechanism by which shielding gas enters into the porosity.

  4. On the feasibility of quantitative ultrasonic determination of fracture toughness: A literature review

    NASA Technical Reports Server (NTRS)

    Fu, L. S.

    1980-01-01

    The three main topics covered are: (1) fracture toughness and microstructure; (2) quantitative ultrasonics and microstructure; and (3) scattering and related mathematical methods. Literature in these areas is reviewed to give insight into the search for a theoretical foundation for quantitative ultrasonic measurement of fracture toughness. The literature review shows that fracture toughness is inherently related to the microstructure; in particular, it depends upon the spacing of inclusions or second-phase particles and the aspect ratio of second-phase particles. There are indications that ultrasonic velocity and attenuation measurements can be used to determine fracture toughness. This leads to a review of the mathematical models available for solving boundary value problems related to the microstructural factors that govern fracture toughness and wave motion. A framework for the theoretical study of the quantitative determination of fracture toughness is described and suggestions for future research are proposed.

  5. Challenges and perspectives in quantitative NMR.

    PubMed

    Giraudeau, Patrick

    2017-01-01

    This perspective article summarizes, from the author's point of view at the beginning of 2016, the major challenges and perspectives in the field of quantitative NMR. The key concepts in quantitative NMR are first summarized; then, the most recent evolutions in terms of resolution and sensitivity are discussed, as well as some potential future research directions in this field. A particular focus is made on methodologies capable of boosting the resolution and sensitivity of quantitative NMR, which could open application perspectives in fields where the sample complexity and the analyte concentrations are particularly challenging. These include multi-dimensional quantitative NMR and hyperpolarization techniques such as para-hydrogen-induced polarization or dynamic nuclear polarization. Because quantitative NMR cannot be dissociated from the key concepts of analytical chemistry, i.e. trueness and precision, the methodological developments are systematically described together with their level of analytical performance. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors

    PubMed Central

    Shuryak, Igor; Dadachova, Ekaterina

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate, than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental

  7. Influence of mom and dad: quantitative genetic models for maternal effects and genomic imprinting.

    PubMed

    Santure, Anna W; Spencer, Hamish G

    2006-08-01

    The expression of an imprinted gene is dependent on the sex of the parent it was inherited from, and as a result reciprocal heterozygotes may display different phenotypes. In contrast, maternal genetic terms arise when the phenotype of an offspring is influenced by the phenotype of its mother beyond the direct inheritance of alleles. Both maternal effects and imprinting may contribute to resemblance between offspring of the same mother. We demonstrate that two standard quantitative genetic models for deriving breeding values, population variances and covariances between relatives, are not equivalent when maternal genetic effects and imprinting are acting. Maternal and imprinting effects introduce both sex-dependent and generation-dependent effects that result in differences in the way additive and dominance effects are defined for the two approaches. We use a simple example to demonstrate that both imprinting and maternal genetic effects add extra terms to covariances between relatives and that model misspecification may over- or underestimate true covariances or lead to extremely variable parameter estimation. Thus, an understanding of various forms of parental effects is essential in correctly estimating quantitative genetic variance components.

  8. Quantitative physiologically based modeling of subjective fatigue during sleep deprivation.

    PubMed

    Fulcher, B D; Phillips, A J K; Robinson, P A

    2010-05-21

    A quantitative physiologically based model of the sleep-wake switch is used to predict variations in subjective fatigue-related measures during total sleep deprivation. The model includes the mutual inhibition of the sleep-active neurons in the hypothalamic ventrolateral preoptic area (VLPO) and the wake-active monoaminergic brainstem populations (MA), as well as circadian and homeostatic drives. We simulate sleep deprivation by introducing a drive to the MA, which we call wake effort, to maintain the system in a wakeful state. Physiologically this drive is proposed to be afferent from the cortex or the orexin group of the lateral hypothalamus. It is hypothesized that the need to exert this effort to maintain wakefulness at high homeostatic sleep pressure correlates with subjective fatigue levels. The model's output indeed exhibits good agreement with existing clinical time series of subjective fatigue-related measures, supporting this hypothesis. Subjective fatigue, adrenaline, and body temperature variations during two 72h sleep deprivation protocols are reproduced by the model. By distinguishing a motivation-dependent orexinergic contribution to the wake-effort drive, the model can be extended to interpret variation in performance levels during sleep deprivation in a way that is qualitatively consistent with existing, clinically derived results. The example of sleep deprivation thus demonstrates the ability of physiologically based sleep modeling to predict psychological measures from the underlying physiological interactions that produce them. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
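
    The published equations and parameters are not in the abstract; the sketch below only mimics the architecture it describes: mutually inhibiting VLPO and MA populations with sigmoid firing rates, a crude circadian drive, a homeostatic pressure H that accumulates during wake, and an additive wake-effort input W that keeps the system awake. All coefficients are invented for illustration.

        import numpy as np
        from scipy.integrate import solve_ivp

        def sigmoid(v):
            return 1.0 / (1.0 + np.exp(-v))

        def rhs(t, y, W):
            vlpo, ma, H = y
            C = np.cos(2 * np.pi * t / 24.0)                  # circadian drive
            d_vlpo = (-vlpo - 2.0 * sigmoid(ma) + H) / 0.5    # sleep-active pool
            d_ma = (-ma - 2.0 * sigmoid(vlpo) + C + W) / 0.5  # wake-active pool
            d_H = (sigmoid(ma) - H / 3.0) / 10.0              # sleep pressure
            return [d_vlpo, d_ma, d_H]

        # 72 h of simulated deprivation: wake effort W holds MA active
        sol = solve_ivp(rhs, (0, 72), [0.0, 1.0, 0.5], args=(1.5,), max_step=0.1)
        print(sol.y[2, -1])   # accumulated homeostatic pressure after 72 h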

  9. QSAR, DFT and molecular modeling studies of peptides from HIV-1 to describe their recognition properties by MHC-I.

    PubMed

    Andrade-Ochoa, S; García-Machorro, J; Bello, Martiniano; Rodríguez-Valdez, L M; Flores-Sandoval, C A; Correa-Basurto, J

    2017-08-03

    Human immunodeficiency virus type-1 (HIV-1) has infected more than 40 million people around the world. HIV-1 treatment still has several side effects, and the development of a vaccine, which is another potential option for decreasing human infections, has faced challenges. This work presents a computational study that includes a quantitative structure-activity relationship (QSAR) analysis using density functional theory (DFT) for reported peptides to identify the principal quantum mechanics descriptors related to peptide activity. In addition, the molecular recognition properties of these peptides are explored on major histocompatibility complex I (MHC-I) through docking and molecular dynamics (MD) simulations accompanied by the Molecular Mechanics Generalized Born Surface Area (MMGBSA) approach for correlating peptide activity reported elsewhere vs. theoretical peptide affinity. The results show that the carboxylic acid and hydroxyl groups are chemical moieties that have an inverse relationship with biological activity. The numbers of sulfides, pyrroles and imidazoles in the peptide structure are directly related to biological activity. In addition, the HOMO orbital energy values, the total absolute charge and the Ghose-Crippen molar refractivity of the peptides are descriptors directly related to the activity and affinity on MHC-I. Docking and MD simulation studies accompanied by an MMGBSA analysis show that the binding free energy, without considering the entropic contribution, is energetically favorable for all the complexes. Furthermore, good interactions are observed for the peptides whose affinity was evaluated experimentally for three proteins. Overall, this study shows that the combination of quantum mechanics descriptors and molecular modeling studies could help describe the immunogenic properties of peptides from HIV-1.

  10. Quantitative Reasoning in Environmental Science: A Learning Progression

    ERIC Educational Resources Information Center

    Mayes, Robert Lee; Forrester, Jennifer Harris; Christus, Jennifer Schuttlefield; Peterson, Franziska Isabel; Bonilla, Rachel; Yestness, Nissa

    2014-01-01

    The ability of middle and high school students to reason quantitatively within the context of environmental science was investigated. A quantitative reasoning (QR) learning progression was created with three progress variables: quantification act, quantitative interpretation, and quantitative modeling. An iterative research design was used as it…

  11. Stroke onset time estimation from multispectral quantitative magnetic resonance imaging in a rat model of focal permanent cerebral ischemia.

    PubMed

    McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A

    2016-08-01

    Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and Trace of Diffusion Tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate with an uncertainty of ±25 min. At all times-points regions with elevated relaxation times were smaller than areas with Dav defined ischemia. Stroke onset time can be determined by quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
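
    As a toy illustration of the volume-based estimate, the sketch below thresholds synthetic qT1/qT2 maps against contralateral statistics, counts the overlap volume, and reads onset time off an assumed linear growth rate; the thresholds, lesion and slope are all invented.

        import numpy as np

        rng = np.random.default_rng(0)
        qt1 = rng.normal(1500, 50, (64, 64))   # synthetic qT1 map (ms)
        qt2 = rng.normal(45, 2, (64, 64))      # synthetic qT2 map (ms)
        qt1[20:30, 20:30] += 200               # "ischemic" elevation
        qt2[20:34, 20:30] += 8

        elev1 = qt1 > 1500 + 2 * 50            # contralateral mean + 2 SD
        elev2 = qt2 > 45 + 2 * 2
        v_overlap = np.logical_and(elev1, elev2).sum()   # voxels with both elevated

        slope = 25.0                           # assumed growth of V(Overlap), voxels/h
        print(f"estimated onset: {v_overlap / slope:.1f} h ago")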

  12. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    NASA Astrophysics Data System (ADS)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-04-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported as mixotrophic. This modelling study investigates, for the first time, the implications for trophic dynamics of including 3 contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton) and also constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mole N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 mol N (mol P)^-1 (as occurred from the mid-1990s), APA allows a 3 to 32% increase of annual gross primary production (GPP). As a result of the higher GPP, annual sedimentation increases, as does bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. The effect of non-constitutive mixotrophy depends on light and affects the ecosystem functioning in terms of annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. Constitutive mixotrophy in nanoflagellates appears to have little influence on this ecosystem functioning. An important conclusion from this work is that different forms of mixotrophy have different impacts on system dynamics and it is thus important to

  13. Quantitative glycomics.

    PubMed

    Orlando, Ron

    2010-01-01

    The ability to quantitatively determine changes is an essential component of comparative glycomics. Multiple strategies are available by which this can be accomplished. These include label-free approaches and strategies where an isotopic label is incorporated into the glycans prior to analysis. The focus of this chapter is to describe each of these approaches while providing insight into their strengths and weaknesses, so that glycomic investigators can make an educated choice of the strategy that is best suited for their particular application.

  14. A Laparoscopic Swine Model of Noncompressible Torso Hemorrhage

    DTIC Science & Technology

    2014-01-01

    Various porcine models of hemorrhage have been developed for civilian and military trauma research. However, the predominant contemporary models lack...significant predictors of mortality. CONCLUSION: This study describes a model of NCTH that reflects clinically relevant physiology in trauma and...uncontrolled hemorrhage. In addition, it quantitatively assesses the role of the swine contractile spleen in the described model. (J Trauma Acute Care Surg

  15. Yeast for Mathematicians: A Ferment of Discovery and Model Competition to Describe Data.

    PubMed

    Lewis, Matthew; Powell, James

    2017-02-01

    In addition to the memorization, algorithmic skills and vocabulary which are the default focus in many mathematics classrooms, professional mathematicians are expected to creatively apply known techniques, construct new mathematical approaches and communicate with and about mathematics. We propose that students can learn these professional, higher-level skills through Laboratory Experiences in Mathematical Biology which put students in the role of mathematics researcher creating mathematics to describe and understand biological data. Here we introduce a laboratory experience centered on yeast (Saccharomyces cerevisiae) growing in a small capped flask with a jar to collect carbon dioxide created during yeast growth and respiration. The lab requires no specialized equipment and can easily be run in the context of a college math class. Students collect data and develop mathematical models to explain the data. To help place instructors in the role of mentor/collaborator (as opposed to jury/judge), we facilitate the lab using model competition judged via Bayesian Information Criterion. This article includes details about the class activity conducted, student examples and pedagogical strategies for success.
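
    A minimal sketch of the model-competition step follows: two candidate growth models are fitted to hypothetical CO2 readings and scored with the Gaussian-error form of the Bayesian Information Criterion, BIC = k*ln(n) + n*ln(SSE/n), the lower score winning. Data and starting values are placeholders, not classroom measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([0, 2, 4, 6, 8, 10], dtype=float)   # hours
        co2 = np.array([0.1, 0.5, 1.8, 3.9, 5.2, 5.6])   # hypothetical jar readings

        def exponential(t, a, r):
            return a * np.exp(r * t)

        def logistic(t, K, r, y0):
            return K / (1 + (K / y0 - 1) * np.exp(-r * t))

        def bic(model, p0):
            p, _ = curve_fit(model, t, co2, p0=p0, maxfev=10000)
            sse = np.sum((co2 - model(t, *p)) ** 2)
            n, k = len(t), len(p)
            return k * np.log(n) + n * np.log(sse / n)

        print(bic(exponential, [0.1, 0.5]), bic(logistic, [6.0, 0.8, 0.1]))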

  16. Quantitative modeling of the reaction/diffusion kinetics of two-chemistry photopolymers

    NASA Astrophysics Data System (ADS)

    Kowalski, Benjamin Andrew

    Optically driven diffusion in photopolymers is an appealing material platform for a broad range of applications, in which the recorded refractive index patterns serve either as images (e.g. data storage, display holography) or as optical elements (e.g. custom GRIN components, integrated optical devices). A quantitative understanding of the reaction/diffusion kinetics is difficult to obtain directly, but is nevertheless necessary in order to fully exploit the wide array of design freedoms in these materials. A general strategy for characterizing these kinetics is proposed, in which key processes are decoupled and independently measured. This strategy enables prediction of a material's potential refractive index change, solely on the basis of its chemical components. The degree to which a material does not reach this potential reveals the fraction of monomer that has participated in unwanted reactions, reducing spatial resolution and dynamic range. This approach is demonstrated for a model material similar to commercial media, achieving quantitative predictions of index response over three orders of exposure dose (~1 to ~10^3 mJ cm^-2) and three orders of feature size (0.35 to 500 microns). The resulting insights enable guided, rational design of new material formulations with demonstrated performance improvement.

  17. LiveDescribe: Can Amateur Describers Create High-Quality Audio Description?

    ERIC Educational Resources Information Center

    Branje, Carmen J.; Fels, Deborah I.

    2012-01-01

    Introduction: The study presented here evaluated the usability of the audio description software LiveDescribe and explored the acceptance rates of audio description created by amateur describers who used LiveDescribe to facilitate the creation of their descriptions. Methods: Twelve amateur describers with little or no previous experience with…

  18. Training Signaling Pathway Maps to Biochemical Data with Constrained Fuzzy Logic: Quantitative Analysis of Liver Cell Responses to Inflammatory Stimuli

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.

    2011-01-01

    Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
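
    In the cFL formalism, Boolean gates are replaced by graded transfer functions; normalized Hill curves are the standard choice in this approach. The sketch below wires two hypothetical stimuli into one downstream node through fuzzy OR; the gains, Hill parameters and the mini-pathway itself are illustrative, not the trained liver-cell model.

        import numpy as np

        def hill(x, g=1.0, n=3.0, k=0.5):
            """Normalized Hill transfer function mapping [0, 1] to [0, g]."""
            return g * (k**n + 1) * x**n / (k**n + x**n)

        def fuzzy_and(a, b):
            return min(a, b)

        def fuzzy_or(a, b):
            return max(a, b)

        tnf, il1 = 0.8, 0.3                    # hypothetical stimulus levels
        ikk = fuzzy_or(hill(tnf), hill(il1))   # graded downstream activation
        print(round(ikk, 3))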

  19. Quantitative estimation of pesticide-likeness for agrochemical discovery.

    PubMed

    Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel

    2014-12-01

    The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In the assessment of these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides and parameterized individually for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The hereby-established quantitative assessment has the ability to rank compounds whether they fail well-established pesticide-likeness rules or not, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of pesticide-likeness of vast chemical libraries in the field of agrochemical discovery. Graphical Abstract: Quantitative models for pesticide-likeness were derived using the concept of desirability functions parameterized for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings.
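
    The published desirability functions and their parameters are not given in the abstract; the sketch below shows the general construction in the spirit of QEP: map each of the six properties through a desirability function d_i in [0, 1] and take the geometric mean. The Gaussian shape, optima and widths are invented placeholders.

        import math

        def desirability(x, mu, sigma):
            return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

        # (value, assumed optimum, assumed width) for MW, logP, HBA, HBD,
        # rotatable bonds and aromatic rings of a hypothetical compound
        props = [(310.0, 300.0, 100.0), (2.8, 2.5, 1.5), (4, 3, 2),
                 (1, 1, 1.5), (5, 4, 3), (2, 2, 1)]

        d = [desirability(x, mu, s) for x, mu, s in props]
        qep = math.exp(sum(math.log(di) for di in d) / len(d))   # geometric mean
        print(round(qep, 3))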

  20. An Assessment of the Quantitative Literacy of Undergraduate Students

    ERIC Educational Resources Information Center

    Wilkins, Jesse L. M.

    2016-01-01

    Quantitative literacy (QLT) represents an underlying higher-order construct that accounts for a person's willingness to engage in quantitative situations in everyday life. The purpose of this study is to retest the construct validity of a model of quantitative literacy (Wilkins, 2010). In this model, QLT represents a second-order factor that…

  1. Three-dimensional modeling and quantitative analysis of gap junction distributions in cardiac tissue.

    PubMed

    Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W

    2011-11-01

    Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
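
    The following sketch illustrates the kind of profile statistics described: an intensity profile along the normalized cell axis is treated as a spatial density, from which end-polarization and intensity-weighted skewness and kurtosis are computed. The two-peak profile is synthetic, standing in for confocal data.

        import numpy as np

        x = np.linspace(0, 1, 200)   # normalized position along the myocyte axis
        profile = (np.exp(-((x - 0.05) / 0.03) ** 2)
                   + np.exp(-((x - 0.95) / 0.03) ** 2))   # gap junctions at cell ends

        w = profile / profile.sum()                       # intensity as a density
        mean = np.sum(w * x)
        var = np.sum(w * (x - mean) ** 2)
        skewness = np.sum(w * (x - mean) ** 3) / var ** 1.5
        kurtosis = np.sum(w * (x - mean) ** 4) / var ** 2
        polarization = w[(x < 0.2) | (x > 0.8)].sum()     # signal fraction at ends
        print(skewness, kurtosis, polarization)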

  2. Quantitative Genetic Modeling of the Parental Care Hypothesis for the Evolution of Endothermy

    PubMed Central

    Bacigalupe, Leonardo D.; Moore, Allen J.; Nespolo, Roberto F.; Rezende, Enrico L.; Bozinovic, Francisco

    2017-01-01

    There are two heuristic explanations proposed for the evolution of endothermy in vertebrates: a correlated response to selection for stable body temperatures, or as a correlated response to increased activity. Parental care has been suggested as a major driving force in this context given its impact on the parents' activity levels and energy budgets, and in the offspring's growth rates due to food provisioning and controlled incubation temperature. This results in a complex scenario involving multiple traits and transgenerational fitness benefits that can be hard to disentangle, quantify and ultimately test. Here we demonstrate how standard quantitative genetic models of maternal effects can be applied to study the evolution of endothermy, focusing on the interplay between daily energy expenditure (DEE) of the mother and growth rates of the offspring. Our model shows that maternal effects can dramatically exacerbate evolutionary responses to selection in comparison to regular univariate models (breeder's equation). This effect would emerge from indirect selection mediated by maternal effects concomitantly with a positive genetic covariance between DEE and growth rates. The multivariate nature of selection, which could favor a higher DEE, higher growth rates or both, might partly explain how high turnover rates were continuously favored in a self-reinforcing process. Overall, our quantitative genetic analysis provides support for the parental care hypothesis for the evolution of endothermy. We contend that much has to be gained from quantifying maternal and developmental effects on metabolic and thermoregulatory variation during adulthood. PMID:29311952
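
    As a compact reminder of the machinery involved, the sketch below contrasts the univariate breeder's equation, R = h^2 * S, with the multivariate response Delta z = G P^-1 s that captures correlated responses when traits such as maternal DEE and offspring growth rate covary. The G and P matrices and the selection differential are invented for illustration.

        import numpy as np

        G = np.array([[0.4, 0.2],    # additive genetic (co)variances: DEE, growth
                      [0.2, 0.3]])
        P = np.array([[1.0, 0.3],    # phenotypic (co)variances
                      [0.3, 1.0]])
        s = np.array([0.5, 0.0])     # selection differential on DEE only

        R_uni = (G[0, 0] / P[0, 0]) * s[0]      # breeder's equation, DEE alone
        R_multi = G @ np.linalg.inv(P) @ s      # correlated responses in both traits
        print(R_uni, R_multi)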

  3. Charge-exchange plasma environment for an ion drive spacecraft. [a model for describing mercury ion engines and its effect on spacecraft subsystems

    NASA Technical Reports Server (NTRS)

    Kaufman, H. R.; Carruth, M. R., Jr.

    1979-01-01

    The charge exchange plasma environment around a spacecraft that uses mercury ion thrusters for propulsion is described. The interactions between the plasma environment and the spacecraft are determined and a model which describes the propagation of the mercury charge exchange plasma is discussed. The model is extended to describe the flow of the molybdenum component of the charge exchange plasma. The uncertainties in the models for various conditions are discussed and current drain to the solar array, charge exchange plasma material deposition, and the effects of space plasma on the charge exchange plasma propagation are addressed.

  4. Whole-brain ex-vivo quantitative MRI of the cuprizone mouse model

    PubMed Central

    Hurley, Samuel A.; Vernon, Anthony C.; Torres, Joel; Dell’Acqua, Flavio; Williams, Steve C.R.; Cash, Diana

    2016-01-01

    Myelin is a critical component of the nervous system and a major contributor to contrast in Magnetic Resonance (MR) images. However, the precise contribution of myelination to multiple MR modalities is still under debate. The cuprizone mouse is a well-established model of demyelination that has been used in several MR studies, but these have often imaged only a single slice and analysed a small region of interest in the corpus callosum. We imaged and analyzed the whole brain of the cuprizone mouse ex-vivo using high-resolution quantitative MR methods (multi-component relaxometry, Diffusion Tensor Imaging (DTI) and morphometry) and found changes in multiple regions, including the corpus callosum, cerebellum, thalamus and hippocampus. The presence of inflammation, confirmed with histology, presents difficulties in isolating the sensitivity and specificity of these MR methods to demyelination using this model. PMID:27833805

  5. Quantitative Microbial Risk Assessment Models for Consumption of Raw Vegetables Irrigated with Reclaimed Water

    PubMed Central

    Hamilton, Andrew J.; Stagnitti, Frank; Premier, Robert; Boland, Anne-Maree; Hale, Glenn

    2006-01-01

    Quantitative microbial risk assessment models for estimating the annual risk of enteric virus infection associated with consuming raw vegetables that have been overhead irrigated with nondisinfected secondary treated reclaimed water were constructed. We ran models for several different scenarios of crop type, viral concentration in effluent, and time since last irrigation event. The mean annual risk of infection was always less for cucumber than for broccoli, cabbage, or lettuce. Across the various crops, effluent qualities, and viral decay rates considered, the annual risk of infection ranged from 10^-3 to 10^-1 when reclaimed-water irrigation ceased 1 day before harvest and from 10^-9 to 10^-3 when it ceased 2 weeks before harvest. Two previously published decay coefficients were used to describe the die-off of viruses in the environment. For all combinations of crop type and effluent quality, application of the more aggressive decay coefficient led to annual risks of infection that satisfied the commonly propounded benchmark of ≤10^-4, i.e., one infection or less per 10,000 people per year, providing that 14 days had elapsed since irrigation with reclaimed water. Conversely, this benchmark was not attained for any combination of crop and water quality when this withholding period was 1 day. The lower decay rate conferred markedly less protection, with broccoli and cucumber being the only crops satisfying the 10^-4 standard for all water qualities after a 14-day withholding period. Sensitivity analyses on the models revealed that in nearly all cases, variation in the amount of produce consumed had the most significant effect on the total uncertainty surrounding the estimate of annual infection risk. The models presented cover what would generally be considered to be worst-case scenarios: overhead irrigation and consumption of vegetables raw. Practices such as subsurface, furrow, or drip irrigation and postharvest washing/disinfection and food preparation
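
    The core calculation can be sketched compactly: an exponential dose-response model, P_inf = 1 - exp(-r * dose), viral die-off exp(-k * t) during the withholding period, and conversion of per-exposure risk to annual risk via 1 - (1 - P)^365. Every parameter value below (dose-response r, water retained on produce, decay rate, serving size) is an illustrative placeholder, not a value from the paper.

        import math

        def annual_risk(c_effluent, ml_retained_per_100g, decay_k, withhold_days,
                        grams_eaten, r=0.5, exposures_per_year=365):
            # dose = viruses ingested per exposure after environmental die-off
            dose = ((c_effluent / 1000.0) * ml_retained_per_100g
                    * (grams_eaten / 100.0) * math.exp(-decay_k * withhold_days))
            p_single = 1 - math.exp(-r * dose)
            return 1 - (1 - p_single) ** exposures_per_year

        print(annual_risk(c_effluent=1.0, ml_retained_per_100g=10.8,
                          decay_k=0.69, withhold_days=14, grams_eaten=100))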

  6. Non-manipulation quantitative designs.

    PubMed

    Rumrill, Phillip D

    2004-01-01

    The article describes non-manipulation quantitative designs of two types, correlational and causal comparative studies. Both of these designs are characterized by the absence of random assignment of research participants to conditions or groups and non-manipulation of the independent variable. Without random selection or manipulation of the independent variable, no attempt is made to draw causal inferences regarding relationships between independent and dependent variables. Nonetheless, non-manipulation studies play an important role in rehabilitation research, as described in this article. Examples from the contemporary rehabilitation literature are included. Copyright 2004 IOS Press

  7. Satellite Contributions to the Quantitative Characterization of Biomass Burning for Climate Modeling

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-01-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the representation of biomass burning quantitatively in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large list of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  8. Quantitative Structure – Property Relationship Modeling of Remote Liposome Loading Of Drugs

    PubMed Central

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-01-01

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a dataset including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and five-fold external validation. The external prediction accuracy for binary models was as high as 91–96%; for continuous models the mean coefficient R2 for regression between predicted versus observed values was 0.76–0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932

  9. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model.

    PubMed

    Yoshioka, S; Matsuhana, B; Tanaka, S; Inouye, Y; Oshima, N; Kinoshita, S

    2011-01-06

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model.
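
    The optics behind the model can be sketched with the first-order multilayer interference condition, lambda = 2 * (n_g*d_g*cos(theta_g) + n_c*d_c*cos(theta_c)): tilting the platelets reduces the effective cytoplasm gap and blue-shifts the reflectance peak. The layer thicknesses and refractive indices below are illustrative guanine/cytoplasm values, not measurements from the paper.

        import math

        def peak_wavelength_nm(tilt_deg, d_guanine=70.0, d_cyto=120.0,
                               n_g=1.83, n_c=1.33):
            # Platelet tilt shrinks the effective cytoplasm spacing
            d_eff = d_cyto * math.cos(math.radians(tilt_deg))
            return 2 * (n_g * d_guanine + n_c * d_eff)   # normal incidence

        for tilt in (0, 15, 30):
            print(tilt, round(peak_wavelength_nm(tilt)), "nm")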

  10. Vibrational energy transport in acetylbenzonitrile described by an ab initio-based quantum tier model

    NASA Astrophysics Data System (ADS)

    Fujisaki, Hiroshi; Yagi, Kiyoshi; Kikuchi, Hiroto; Takami, Toshiya; Stock, Gerhard

    2017-01-01

    Performing comprehensive quantum-chemical calculations, a vibrational Hamiltonian of acetylbenzonitrile is constructed, on the basis of which a quantum-mechanical "tier model" is developed that describes the vibrational dynamics following excitation of the CN stretch mode. Taking into account 36 vibrational modes and cubic and quartic anharmonic couplings between up to three different modes, the tier model calculations are shown to qualitatively reproduce the main findings of the experiments of Rubtsov and coworkers (2011), including the energy relaxation of the initially excited CN mode and the structure-dependent vibrational transport. Moreover, the calculations suggest that the experimentally measured cross-peak among the CN and CO modes does not correspond to direct excitation of the CO normal mode but rather reflects excited low-frequency vibrations that anharmonically couple to the CO mode. Complementary quasiclassical trajectory calculations are found to be in good overall agreement with the quantum calculations.

  11. [Influence of Spectral Pre-Processing on PLS Quantitative Model of Detecting Cu in Navel Orange by LIBS].

    PubMed

    Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui

    2015-05-01

    Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing, mean centering and standard normal variate transformation methods. Then the 319~338 nm wavelength section containing characteristic spectral lines of Cu was selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV) and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, this model gave the best calibration and prediction quality. The results show that by selecting the appropriate data pre-processing method, the prediction accuracy of PLS quantitative models for fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
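
    A rough outline of the workflow in Python follows: smooth and mean-center the spectra, then fit a PLS model on the selected window and estimate RMSECV by cross-validation. The random arrays stand in for the 52 spectra and reference Cu concentrations, and the component count is arbitrary.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        X = np.random.rand(52, 200)   # spectra restricted to the 319~338 nm window
        y = np.random.rand(52)        # reference Cu concentrations

        X = savgol_filter(X, window_length=13, polyorder=2, axis=1)  # 13-pt smoothing
        X = X - X.mean(axis=0)                                       # mean centering

        pls = PLSRegression(n_components=5).fit(X, y)
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
        rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
        print(rmsecv)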

  12. Quantitative Modeling of Entangled Polymer Rheology: Experiments, Tube Models and Slip-Link Simulations

    NASA Astrophysics Data System (ADS)

    Desai, Priyanka Subhash

    Rheology properties are sensitive indicators of molecular structure and dynamics. The relationship between rheology and polymer dynamics is captured in the constitutive model, which, if accurate and robust, would greatly aid molecular design and polymer processing. This dissertation is thus focused on building accurate and quantitative constitutive models that can help predict linear and non-linear viscoelasticity. In this work, we have used a multi-pronged approach based on the tube theory, coarse-grained slip-link simulations, and advanced polymeric synthetic and characterization techniques, to confront some of the outstanding problems in entangled polymer rheology. First, we modified simple tube based constitutive equations in extensional rheology and developed functional forms to test the effect of Kuhn segment alignment on a) tube diameter enlargement and b) monomeric friction reduction between subchains. We, then, used these functional forms to model extensional viscosity data for polystyrene (PS) melts and solutions. We demonstrated that the idea of reduction in segmental friction due to Kuhn alignment is successful in explaining the qualitative difference between melts and solutions in extension as revealed by recent experiments on PS. Second, we compiled literature data and used it to develop a universal tube model parameter set and prescribed their values and uncertainties for 1,4-PBd by comparing linear viscoelastic G' and G" mastercurves for 1,4-PBds of various branching architectures. The high frequency transition region of the mastercurves superposed very well for all the 1,4-PBds irrespective of their molecular weight and architecture, indicating universality in high frequency behavior. Therefore, all three parameters of the tube model were extracted from this high frequency transition region alone. Third, we compared predictions of two versions of the tube model, Hierarchical model and BoB model against linear viscoelastic data of blends of 1,4-PBd

  13. Quantitative skills as a graduate learning outcome of university science degree programmes: student performance explored through the planned-enacted-experienced curriculum model

    NASA Astrophysics Data System (ADS)

    Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn

    2016-07-01

    Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills: the Quantitative Skills Assessment of Science Students (QSASS), which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results showed, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.

  14. Transferable atomistic model to describe the energetics of zirconia

    NASA Astrophysics Data System (ADS)

    Wilson, Mark; Schönberger, Uwe; Finnis, Michael W.

    1996-10-01

    We have investigated the energies of a number of phases of ZrO2 using models of an increasing degree of sophistication: the simple ionic model, the polarizable ion model, the compressible ion model, and finally a model including quadrupole polarizability of the oxygen ions. The three structures which are observed with increasing temperature are monoclinic, tetragonal, and cubic (fluorite). Besides these we have studied some hypothetical structures which certain potentials erroneously predict or which occur in other oxides with this stoichiometry, e.g., the α-PbO2 structure and rutile. We have also performed ab initio density functional calculations with the full-potential linear combination of muffin-tin orbitals method to investigate the cubic-tetragonal distortion. A detailed comparison is made between the results using classical potentials, the experimental data, and our own and other ab initio results. The factors which stabilize the various structures are analyzed. We find the only genuinely transferable model is the one including compressible ions and anion polarizability to the quadrupole level.

  15. A visual metaphor describing neural dynamics in schizophrenia.

    PubMed

    van Beveren, Nico J M; de Haan, Lieuwe

    2008-07-09

    In many scientific disciplines the use of a metaphor as an heuristic aid is not uncommon. A well known example in somatic medicine is the 'defense army metaphor' used to characterize the immune system. In fact, probably a large part of the everyday work of doctors consists of 'translating' scientific and clinical information (i.e. causes of disease, percentage of success versus risk of side-effects) into information tailored to the needs and capacities of the individual patient. The ability to do so in an effective way is at least partly what makes a clinician a good communicator. Schizophrenia is a severe psychiatric disorder which affects approximately 1% of the population. Over the last two decades a large amount of molecular-biological, imaging and genetic data have been accumulated regarding the biological underpinnings of schizophrenia. However, it remains difficult to understand how the characteristic symptoms of schizophrenia such as hallucinations and delusions are related to disturbances on the molecular-biological level. In general, psychiatry seems to lack a conceptual framework with sufficient explanatory power to link the mental- and molecular-biological domains. Here, we present an essay-like study in which we propose to use visualized concepts stemming from the theory on dynamical complex systems as a 'visual metaphor' to bridge the mental- and molecular-biological domains in schizophrenia. We first describe a computer model of neural information processing; we show how the information processing in this model can be visualized, using concepts from the theory on complex systems. We then describe two computer models which have been used to investigate the primary theory on schizophrenia, the neurodevelopmental model, and show how disturbed information processing in these two computer models can be presented in terms of the visual metaphor previously described. Finally, we describe the effects of dopamine neuromodulation, of which disturbances have

  16. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    PubMed

    Geuna, S

    2000-11-20

    Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterize the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.

  17. Quantitative study of FORC diagrams in thermally corrected Stoner-Wohlfarth nanoparticle systems

    NASA Astrophysics Data System (ADS)

    De Biasi, E.; Curiale, J.; Zysler, R. D.

    2016-12-01

    The use of FORC diagrams is becoming increasingly popular among researchers devoted to magnetism and magnetic materials. However, a thorough interpretation of this kind of diagram, in order to achieve quantitative information, requires an appropriate model of the studied system. For that reason most FORC studies are limited to qualitative analysis. In magnetic systems thermal fluctuations "blur" the signatures of the anisotropy, volume and particle interaction distributions; therefore thermal effects in nanoparticle systems conspire against a proper interpretation and analysis of these diagrams. Motivated by this fact, we have quantitatively studied the degree of accuracy of the information extracted from FORC diagrams for the special case of single-domain, thermally corrected Stoner-Wohlfarth (easy axes along the external field orientation) nanoparticle systems. In this work, the starting point is an analytical model that describes the behavior of a magnetic nanoparticle system as a function of field, anisotropy, temperature and measurement time. In order to study the quantitative degree of accuracy of our model, we built FORC diagrams for different archetypical cases of magnetic nanoparticles. Our results show that from the quantitative information obtained from the diagrams, under the hypotheses of the proposed model, it is possible to recover the features of the original system with accuracy above 95%. This accuracy improves at low temperatures, and it is also possible to access the anisotropy distribution directly from the FORC coercive field profile. Indeed, our simulations predict that the volume distribution plays a secondary role, with the mean value and its deviation being the only important parameters. Therefore it is possible to obtain an accurate result for the inversion and interaction fields regardless of the details of the volume distribution.
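
    The abstract does not restate how the diagrams are built; the standard construction computes the FORC distribution as the mixed second derivative rho(H, Hr) = -(1/2) d2M/dH dHr over a family of reversal curves. A finite-difference sketch on a toy magnetization surface:

        import numpy as np

        H = np.linspace(-1, 1, 101)    # applied field grid
        Hr = np.linspace(-1, 0, 51)    # reversal field grid
        HH, RR = np.meshgrid(H, Hr)
        # Toy M(H, Hr) surface standing in for measured reversal curves
        M = np.tanh(5 * (HH - RR - 0.3)) + np.tanh(5 * (HH + 0.1))

        dM_dH = np.gradient(M, H, axis=1)
        rho = -0.5 * np.gradient(dM_dH, Hr, axis=0)   # FORC distribution
        print(rho.shape, float(rho.max()))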

  18. Quantitative modeling and optimization of magnetic tweezers.

    PubMed

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H

    2009-06-17

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads.
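
    The force principle can be illustrated without the finite-element machinery: for a small unsaturated superparamagnetic bead with induced moment m = chi*V*B/mu0, the vertical force is F = (chi*V/mu0) * B * dB/dz. The exponential field profile and bead parameters below are assumptions for illustration, not the paper's computed fields.

        import numpy as np

        mu0 = 4e-7 * np.pi
        chi, radius = 0.1, 0.5e-6                 # assumed susceptibility, 0.5 um bead
        V = 4.0 / 3.0 * np.pi * radius ** 3

        z = np.linspace(1e-4, 5e-3, 500)          # distance below the magnets (m)
        B = 0.3 * np.exp(-z / 1.5e-3)             # assumed exponential field decay (T)

        dBdz = np.gradient(B, z)
        F = chi * V / mu0 * B * np.abs(dBdz)      # pulling force magnitude (N)
        print(F.max() * 1e12, "pN (peak, illustrative)")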

  19. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Abstract Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664

  20. A new settling velocity model to describe secondary sedimentation.

    PubMed

    Ramin, Elham; Wágner, Dorottya S; Yde, Lars; Binning, Philip J; Rasmussen, Michael R; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-12-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS). In addition, correlations between the Herschel-Bulkley rheological model parameters and sludge concentration were identified with data from batch rheological experiments. A 2-D axisymmetric CFD model of a circular SST containing the new settling velocity and rheological model was validated with full-scale measurements. Finally, it was shown that the representation of compression settling in the CFD model can significantly influence the prediction of sludge distribution in the SSTs under dry- and wet-weather flow conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
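
    For orientation, the sketch below implements the classic Takács double-exponential hindered-settling velocity function often used in SST simulation. It is not the new hindered-transient-compression model developed in this study, and the parameter values are generic illustrative defaults, not the calibrated values.

    ```python
    import numpy as np

    def takacs_velocity(X, v0=5.5e-3, rh=0.4, rp=2.5, X_min=0.1):
        """
        Classic Takács double-exponential settling velocity (m/s) versus
        sludge concentration X (kg/m^3). v0 is the maximum settling velocity,
        rh/rp (m^3/kg) control hindered and low-concentration behaviour, and
        X_min is the non-settleable fraction. All values are assumptions.
        """
        X_star = np.maximum(X - X_min, 0.0)
        v = v0 * (np.exp(-rh * X_star) - np.exp(-rp * X_star))
        return np.clip(v, 0.0, v0)

    X = np.linspace(0.0, 10.0, 6)   # kg/m^3
    print(takacs_velocity(X))
    ```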

  1. Bound Pool Fractions Complement Diffusion Measures to Describe White Matter Micro and Macrostructure

    PubMed Central

    Stikov, Nikola; Perry, Lee M.; Mezer, Aviv; Rykhlevskaia, Elena; Wandell, Brian A.; Pauly, John M.; Dougherty, Robert F.

    2010-01-01

    Diffusion imaging and bound pool fraction (BPF) mapping are two quantitative magnetic resonance imaging techniques that measure microstructural features of the white matter of the brain. Diffusion imaging provides a quantitative measure of the diffusivity of water in tissue. BPF mapping is a quantitative magnetization transfer (qMT) technique that estimates the proportion of exchanging protons bound to macromolecules, such as those found in myelin, and is thus a more direct measure of myelin content than diffusion. In this work, we combine BPF estimates of macromolecular content with measurements of diffusivity within human white matter tracts. Within the white matter, the correlation between BPFs and diffusivity measures such as fractional anisotropy and radial diffusivity was modest, suggesting that diffusion tensor imaging and bound pool fractions are complementary techniques. We found that several major tracts have high BPF, suggesting a higher density of myelin in these tracts. We interpret these results in the context of a quantitative tissue model. PMID:20828622

  2. High-throughput real-time quantitative reverse transcription PCR.

    PubMed

    Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F

    2006-02-01

    Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected ΔCt method, and the comparative cycle time, or ΔΔCt, method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
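
    A minimal sketch of the comparative ΔΔCt calculation named above, with made-up Ct values; the efficiency-corrected variant would replace the base 2 with a measured per-assay amplification efficiency.

    ```python
    # Hypothetical Ct values for a target and a reference gene in two samples.
    ct = {
        ("treated",   "target"): 24.1, ("treated",   "reference"): 18.0,
        ("untreated", "target"): 26.5, ("untreated", "reference"): 18.2,
    }

    # Normalize each sample to its reference gene, then compare samples.
    d_ct_treated   = ct[("treated", "target")]   - ct[("treated", "reference")]
    d_ct_untreated = ct[("untreated", "target")] - ct[("untreated", "reference")]
    dd_ct = d_ct_treated - d_ct_untreated

    # Fold change assumes ~100% efficiency (doubling each cycle); the
    # efficiency-corrected method uses E**-ddCt with measured E close to 2.
    fold_change = 2.0 ** (-dd_ct)
    print(f"ddCt = {dd_ct:.2f}, fold change = {fold_change:.2f}")
    ```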

  3. In silico quantitative structure-toxicity relationship study of aromatic nitro compounds.

    PubMed

    Pasha, Farhan Ahmad; Neaz, Mohammad Morshed; Cho, Seung Joo; Ansari, Mohiuddin; Mishra, Sunil Kumar; Tiwari, Sharvan

    2009-05-01

    Small molecules often have toxicities that are a function of molecular structural features, and minor variations in those features can make a large difference in toxicity. Consequently, in silico techniques may be used to correlate such molecular toxicities with structural features. For nine different sets of aromatic nitro compounds with known observed toxicities against different targets, we developed ligand-based 2D quantitative structure-toxicity relationship models using 20 selected topological descriptors. Topological descriptors have several advantages, such as conformational independence and fast, simple computation, while still yielding good results. Multiple linear regression analysis was used to correlate variations in toxicity with molecular properties. The information index on molecular size, the lopping centric index and the Kier flexibility index were identified as fundamental descriptors for different kinds of toxicity, further indicating that molecular size, branching and molecular flexibility may be particularly important factors in quantitative structure-toxicity relationship analysis. This study revealed that topological descriptor-guided quantitative structure-toxicity relationships provide a useful, cost- and time-efficient in silico tool for describing small-molecule toxicities.

  4. Multiplicative effects model with internal standard in mobile phase for quantitative liquid chromatography-mass spectrometry.

    PubMed

    Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen

    2014-07-01

    Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added into the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer to ensure that variations in measurement conditions and/or instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on the unique strategy of adding internal standard in mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof of concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of conventional internal standard method based on the peak height ratio and partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended in quantitative LC-MS analysis of more complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. A novel approach for evaluating the performance of real time quantitative loop-mediated isothermal amplification-based methods.

    PubMed

    Nixon, Gavin J; Svenstrup, Helle F; Donald, Carol E; Carder, Caroline; Stephenson, Judith M; Morris-Jones, Stephen; Huggett, Jim F; Foy, Carole A

    2014-12-01

    Molecular diagnostic measurements are currently underpinned by the polymerase chain reaction (PCR). There are also a number of alternative nucleic acid amplification technologies which, unlike PCR, work at a single temperature. These 'isothermal' methods reportedly offer potential advantages over PCR, such as simplicity, speed and resistance to inhibitors, and could also be used for quantitative molecular analysis. However, there are currently few mechanisms to evaluate their quantitative performance, which would assist assay development and study comparisons. This study uses a sexually transmitted infection diagnostic model in combination with an adapted metric termed the isothermal doubling time (IDT), akin to PCR efficiency, to compare quantitative PCR and quantitative loop-mediated isothermal amplification (qLAMP) assays, and to quantify the impact of matrix interference. The performance metric described here facilitates the comparison of qLAMP assays, which could assist assay development and validation activities.

  6. XML Encoding of Features Describing Rule-Based Modeling of Reaction Networks with Multi-Component Molecular Complexes

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2011-01-01

    Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833

  7. Homogeneous Diffusion Solid Model as a Realistic Approach to Describe Adsorption onto Materials with Different Geometries.

    PubMed

    Sabio, E; Zamora, F; González-García, C M; Ledesma, B; Álvarez-Murillo, A; Román, S

    2016-12-01

    In this work, the adsorption kinetics of p-nitrophenol (PNP) onto several commercial activated carbons (ACs) with different textural and geometrical characteristics was studied. To this end, a homogeneous diffusion solid model (HDSM), which takes the adsorbent shape into account, was used. The HDSM was solved by means of the finite element method (FEM) using the commercial software COMSOL. The developed model can describe the different kinetic patterns observed in the experiments, and it shows that the sharp drop in adsorption rate observed in some samples is caused by the formation of a concentration wave. The model allows one to visualize the changes in concentration taking place in both the liquid and solid phases, which makes it possible to link the kinetic behaviour with the main features of the carbon samples.
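
    The following is a minimal explicit finite-difference sketch of the homogeneous solid diffusion equation in a sphere, ∂q/∂t = D(∂²q/∂r² + (2/r)∂q/∂r), with a constant surface concentration. The study itself solved the coupled liquid-solid HDSM with FEM in COMSOL for several particle geometries; the diffusivity, radius and boundary values here are illustrative assumptions.

    ```python
    import numpy as np

    # Explicit finite differences for dq/dt = D * (d2q/dr2 + (2/r) dq/dr)
    # in a sphere, with equilibrium at the outer surface (constant-bath case).
    D = 1e-11             # effective solid diffusivity, m^2/s (assumption)
    R = 5e-4              # particle radius, m (assumption)
    N = 50                # radial nodes
    dr = R / (N - 1)
    dt = 0.1 * dr**2 / D  # stable explicit step (factor < 1/6)
    r = np.linspace(0.0, R, N)

    q = np.zeros(N)       # normalized solid-phase concentration, initially empty
    q_surface = 1.0
    nsteps = 20_000

    for _ in range(nsteps):
        q[-1] = q_surface                          # boundary: surface equilibrium
        lap = np.zeros(N)
        lap[1:-1] = ((q[2:] - 2.0 * q[1:-1] + q[:-2]) / dr**2
                     + (2.0 / r[1:-1]) * (q[2:] - q[:-2]) / (2.0 * dr))
        lap[0] = 6.0 * (q[1] - q[0]) / dr**2       # symmetry condition at r = 0
        q += dt * D * lap

    q_avg = 3.0 * np.trapz(q * r**2, r) / R**3     # volume-averaged uptake
    print(f"fractional uptake after {nsteps * dt:.0f} s: {q_avg:.3f}")
    ```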

  8. Study on quantitative risk assessment model of the third party damage for natural gas pipelines based on fuzzy comprehensive assessment

    NASA Astrophysics Data System (ADS)

    Qiu, Zeyang; Liang, Wei; Wang, Xue; Lin, Yang; Zhang, Meng

    2017-05-01

    As an important part of the national energy supply system, transmission pipelines for natural gas can cause serious environmental pollution and loss of life and property in the event of an accident. Third party damage is one of the most significant causes of natural gas pipeline system accidents, and it is very important to establish an effective quantitative risk assessment model of third party damage in order to reduce the number of gas pipeline operation accidents. Because third party damage accidents are characterized by diversity, complexity and uncertainty, this paper establishes a quantitative risk assessment model of third party damage based on the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE). First, the risk sources of third party damage are identified; the weights of the factors are then determined via an improved AHP; finally, the importance of each factor is calculated with the fuzzy comprehensive evaluation model. The results show that the quantitative risk assessment model is suitable for third party damage to natural gas pipelines, and improvement measures can be put forward to avoid accidents based on the importance of each factor.
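
    The two numerical steps named above are standard and compact enough to sketch: AHP weights as the normalized principal eigenvector of a pairwise comparison matrix (with a consistency check), followed by the fuzzy composite b = w·R. The three factors, the comparison matrix and the membership matrix below are made-up illustrations, not the paper's actual risk factors.

    ```python
    import numpy as np

    # --- AHP: weights = principal eigenvector of the pairwise matrix ---
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])          # hypothetical pairwise judgments
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized factor weights

    # consistency ratio (random index RI = 0.58 for n = 3); accept if CR < 0.1
    lam_max = eigvals.real[k]
    CR = ((lam_max - 3) / (3 - 1)) / 0.58

    # --- FCE: composite evaluation b = w . R over risk grades ---
    # rows: factors; columns: membership in grades (low, medium, high)
    R = np.array([[0.1, 0.3, 0.6],
                  [0.5, 0.3, 0.2],
                  [0.6, 0.3, 0.1]])
    b = w @ R
    print("weights:", np.round(w, 3), "CR:", round(CR, 3))
    print("grade membership (low, medium, high):", np.round(b, 3))
    ```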

  9. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of these quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
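
    The integrals referred to above can be sketched concretely. Below, the observed-scale population mean of a Poisson GLMM with log link, E[exp(ℓ)] for latent ℓ ~ N(μ, σ²), is computed by Gauss-Hermite quadrature and checked against the closed form exp(μ + σ²/2). This mirrors in spirit what the QGglmm R package automates; μ and σ² are arbitrary illustrative values.

    ```python
    import numpy as np

    mu, sigma2 = 1.0, 0.5   # latent-scale mean and total variance (assumptions)

    # Probabilists' Gauss-Hermite rule: integrates f(z) against exp(-z^2/2).
    nodes, weights = np.polynomial.hermite_e.hermegauss(30)
    latent = mu + np.sqrt(sigma2) * nodes
    mean_obs = np.sum(weights * np.exp(latent)) / np.sqrt(2 * np.pi)

    # Numeric quadrature agrees with the lognormal closed form.
    print(mean_obs, np.exp(mu + sigma2 / 2))
    ```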

  10. Quantitative Microbial Risk Assessment Tutorial - Primer

    EPA Science Inventory

    This document provides a Quantitative Microbial Risk Assessment (QMRA) primer that organizes QMRA tutorials. The tutorials describe functionality of a QMRA infrastructure, guide the user through software use and assessment options, provide step-by-step instructions for implementi...

  11. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    NASA Astrophysics Data System (ADS)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-09-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported to be mixotrophic. This modelling study investigates the implications for trophic dynamics of including 3 contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton) and constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mol N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 mol N (mol P)⁻¹ (as occurred from the mid-1990s), APA allows a 3-32% increase in annual gross primary production (GPP). As a result of the higher GPP, annual sedimentation increases, as does bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels, because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. Under high irradiance, non-constitutive mixotrophy appreciably increases annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. In this ecosystem, non-constitutive mixotrophy is also observed to have an indirect stimulating effect on diatoms. Constitutive mixotrophy in nanoflagellates appears to have little influence on the functioning of this ecosystem. An important conclusion from this work is that contrasting forms of mixotrophy have different effects on ecosystem functioning.

  12. Quantitative Research in Written Composition.

    ERIC Educational Resources Information Center

    Gebhard, Ann O.

    Offered as an introductory guide to teachers interested in approaching written English as a "second dialect" that students must master, this review covers quantitative investigations of written language. The first section deals with developmental studies, describing how a variety of researchers have related written structure to writer maturity.…

  13. Stochastic GARCH dynamics describing correlations between stocks

    NASA Astrophysics Data System (ADS)

    Prat-Ortega, G.; Savel'ev, S. E.

    2014-09-01

    The ARCH and GARCH processes have been successfully used for modelling price dynamics such as stock returns or foreign exchange rates. Analysing the long range correlations between stocks, we propose a model, based on the GARCH process, which is able to describe the main characteristics of the stock price correlations, including the mean, variance, probability density distribution and the noise spectrum.
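
    For reference, a minimal GARCH(1,1) return simulator of the type such models build on is sketched below; ω, α and β are illustrative values satisfying the stationarity condition α + β < 1, not parameters fitted to stock data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    omega, alpha, beta = 1e-5, 0.08, 0.90   # illustrative GARCH(1,1) parameters
    T = 10_000

    r = np.zeros(T)                                   # simulated returns
    sigma2 = np.full(T, omega / (1 - alpha - beta))   # unconditional variance
    for t in range(1, T):
        # conditional variance reacts to the last shock and its own history
        sigma2[t] = omega + alpha * r[t - 1]**2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

    # stylized facts: excess kurtosis (heavy tails) and volatility clustering
    print("kurtosis:", ((r - r.mean())**4).mean() / r.var()**2)
    ```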

  14. Quantitative Microbial Risk Assessment and Infectious Disease Transmission Modeling of Waterborne Enteric Pathogens.

    PubMed

    Brouwer, Andrew F; Masters, Nina B; Eisenberg, Joseph N S

    2018-04-20

    Waterborne enteric pathogens remain a global health threat. Increasingly, quantitative microbial risk assessment (QMRA) and infectious disease transmission modeling (IDTM) are used to assess waterborne pathogen risks and evaluate mitigation. These modeling efforts, however, have largely been conducted independently for different purposes and in different settings. In this review, we examine the settings where each modeling strategy is employed. QMRA research has focused on food contamination and recreational water in high-income countries (HICs) and drinking water and wastewater in low- and middle-income countries (LMICs). IDTM research has focused on large outbreaks (predominately LMICs) and vaccine-preventable diseases (LMICs and HICs). Human ecology determines the niches that pathogens exploit, leading researchers to focus on different risk assessment research strategies in different settings. To enhance risk modeling, QMRA and IDTM approaches should be integrated to include dynamics of pathogens in the environment and pathogen transmission through populations.

  15. Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments

    NASA Astrophysics Data System (ADS)

    Atwal, Gurinder S.; Kinney, Justin B.

    2016-03-01

    A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.

  16. A quantitative study on magnesium alloy stent biodegradation.

    PubMed

    Gao, Yuanming; Wang, Lizhen; Gu, Xuenan; Chu, Zhaowei; Guo, Meng; Fan, Yubo

    2018-06-06

    Insufficient scaffolding time due to rapid corrosion is the main problem of magnesium alloy stents (MAS). The finite element method has been used to investigate the corrosion of MAS; however, related studies have mostly treated the corrosion of all elements as one-dimensional. Multi-dimensional corrosion significantly influences the mechanical integrity of MAS structures such as edges and corners. In this study, the effects of multi-dimensional corrosion were quantified experimentally, and a phenomenological corrosion model was then developed to account for these effects. We performed immersion tests with magnesium alloy (AZ31B) cubes having different numbers of exposed surfaces in order to analyze dimensional differences; the cubes also represent the hexahedral elements used in simulation. The corrosion rates of the cubes were found to be almost proportional to their numbers of exposed surfaces, especially when pitting corrosion was not marked. In conclusion, under multi-dimensional corrosion the corrosion rate of each element accelerates with its number of corroding surfaces, and under uniform corrosion the damage ratios among elements of the same size are proportional to the ratios of their corrosion-surface numbers. A finite element simulation using the proposed model provided more detail on the changes in morphology and mechanics over the scaffolding time by removing 25.7% of the MAS elements. The proposed corrosion model reflects the effects of multi-dimensionality on corrosion and can be used to predict the degradation process of MAS quantitatively. Copyright © 2018 Elsevier Ltd. All rights reserved.
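
    The reported proportionality between corrosion rate and exposed-surface number suggests a simple element-removal rule, sketched below: each element's damage grows in proportion to its number of corroding surfaces, and the element is deleted when damage reaches 1. The rate constant, surface counts and noise term are illustrative assumptions, not the calibrated model from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    k = 0.02                                     # damage per surface per step (assumption)
    n_surfaces = rng.integers(1, 6, size=100)    # exposed surfaces per element (1-5)
    damage = np.zeros(100)
    alive = np.ones(100, dtype=bool)

    step = 0
    while alive.mean() > 0.743 and step < 10_000:   # stop near 25.7% removed
        noise = rng.uniform(0.8, 1.2, size=100)     # mild pitting-like variability
        damage[alive] += k * n_surfaces[alive] * noise[alive]
        alive &= damage < 1.0                       # remove fully damaged elements
        step += 1

    print(f"{100 - alive.sum()} elements removed after {step} steps")
    ```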

  17. Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?

    NASA Astrophysics Data System (ADS)

    Gasbarro, Andrew

    2018-03-01

    In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternate EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has χ²/d.o.f. = 0.5, compared to χ²/d.o.f. = 29.6 from fitting next-to-leading-order chiral perturbation theory. When the 0⁺⁺ (σ) mass is included in the fit, χ²/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.

  18. An Inside View: The Utility of Quantitative Observation in Understanding College Educational Experiences

    ERIC Educational Resources Information Center

    Campbell, Corbin M.

    2017-01-01

    This article describes quantitative observation as a method for understanding college educational experiences. Quantitative observation has been used widely in several fields and in K-12 education, but has had limited application to research in higher education and student affairs to date. The article describes the central tenets of quantitative…

  19. A Quantitative Gas Chromatographic Ethanol Determination.

    ERIC Educational Resources Information Center

    Leary, James J.

    1983-01-01

    Describes a gas chromatographic experiment for the quantitative determination of volume percent ethanol in water ethanol solutions. Background information, procedures, and typical results are included. Accuracy and precision of results are both on the order of two percent. (JN)

  20. Quantitative systems toxicology

    PubMed Central

    Bloomingdale, Peter; Housand, Conrad; Apgar, Joshua F.; Millard, Bjorn L.; Mager, Donald E.; Burke, John M.; Shah, Dhaval K.

    2017-01-01

    The overarching goal of modern drug development is to optimize therapeutic benefits while minimizing adverse effects. However, inadequate efficacy and safety concerns remain the major causes of drug attrition in clinical development. For the past 80 years, toxicity testing has consisted of evaluating the adverse effects of drugs in animals to predict human health risks. The U.S. Environmental Protection Agency recognized the need to develop innovative toxicity testing strategies and asked the National Research Council to develop a long-range vision and strategy for toxicity testing in the 21st century. The vision aims to reduce the use of animals and drug development costs through the integration of computational modeling and in vitro experimental methods that evaluate the perturbation of toxicity-related pathways. Towards this vision, collaborative quantitative systems pharmacology and toxicology (QSP/QST) modeling endeavors have been initiated amongst numerous organizations worldwide. In this article, we discuss how quantitative structure-activity relationship (QSAR), network-based, and pharmacokinetic/pharmacodynamic modeling approaches can be integrated into the framework of QST models. Additionally, we review the application of QST models to predict the cardiotoxicity and hepatotoxicity of drugs throughout their development. Cell- and organ-specific QST models are likely to become an essential component of modern toxicity testing, and provide a solid foundation towards determining individualized therapeutic windows to improve patient safety. PMID:29308440

  1. Good practices for quantitative bias analysis.

    PubMed

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of these methods.

  2. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...

  3. Complexity-aware simple modeling.

    PubMed

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. DaMoScope and its internet graphics for the visual control of adjusting mathematical models describing experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com

    The experience of using the dynamic atlas of experimental data and of the mathematical models describing them, for the problem of adjusting parametric models of observables as functions of kinematic variables, is presented. The capability to display large sets of experimental data together with the models describing them is demonstrated using, as examples, data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface to the experimental data and to the codes of the adjusted parametric models, with the parameters giving the best description of the data, is shown schematically. The DaMoScope codes are freely available.

  5. Quantitative risk assessment system (QRAS)

    NASA Technical Reports Server (NTRS)

    Tan, Zhibin (Inventor); Mosleh, Ali (Inventor); Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Chang, Yung-Hsien (Inventor); Groen, Francisco J (Inventor); Swaminathan, Sankaran (Inventor)

    2001-01-01

    A quantitative risk assessment system (QRAS) builds a risk model of a system for which risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest level scenarios, preserved in event tree structure. The analysis runs, at any level of the hierarchy and below, access this baseline for risk quantitative computation as well as ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.

  6. Molecular Modeling on Berberine Derivatives toward BuChE: An Integrated Study with Quantitative Structure-Activity Relationships Models, Molecular Docking, and Molecular Dynamics Simulations.

    PubMed

    Fang, Jiansong; Pang, Xiaocong; Wu, Ping; Yan, Rong; Gao, Li; Li, Chao; Lian, Wenwen; Wang, Qi; Liu, Ai-lin; Du, Guan-hua

    2016-05-01

    A dataset of 67 berberine derivatives for the inhibition of butyrylcholinesterase (BuChE) was studied based on a combination of quantitative structure-activity relationship models, molecular docking, and molecular dynamics methods. First, a series of berberine derivatives were reported, and their inhibitory activities toward BuChE were evaluated. In 2D quantitative structure-activity relationship studies, the best model, built by partial least squares, had a conventional correlation coefficient for the training set (R²) of 0.883, a cross-validation correlation coefficient (Q²cv) of 0.777, and a conventional correlation coefficient for the test set (R²pred) of 0.775. The model was also confirmed by Y-randomization examination. In addition, molecular docking and molecular dynamics simulation were performed to better elucidate the inhibitory mechanism of three typical berberine derivatives (berberine, C2, and C55) toward BuChE. The predicted binding free energy results were consistent with the experimental data and showed that the van der Waals energy term (ΔEvdw) difference played the most important role in differentiating the activity among the three inhibitors (berberine, C2, and C55). The developed quantitative structure-activity relationship models provide details on the fine relationship linking structure and activity and offer clues for structural modifications, and the molecular simulation helps in understanding the inhibitory mechanism of the three typical inhibitors. In conclusion, the results of this study provide useful clues for new drug design and discovery of BuChE inhibitors from berberine derivatives. © 2015 John Wiley & Sons A/S.
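
    The validation statistics quoted above (training R², leave-one-out cross-validated Q²) can be sketched for a generic PLS model as below; the descriptor matrix and activities are random stand-ins for real compound data, so the printed numbers are meaningless in themselves.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 20))      # 60 "compounds" x 20 "descriptors"
    y = X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=60)

    pls = PLSRegression(n_components=3)
    pls.fit(X, y)
    r2 = pls.score(X, y)               # conventional R^2 on the training set

    # leave-one-out predictions give the cross-validated Q^2
    y_loo = cross_val_predict(PLSRegression(n_components=3), X, y,
                              cv=LeaveOneOut()).ravel()
    q2 = 1 - np.sum((y - y_loo)**2) / np.sum((y - y.mean())**2)
    print(f"R2 = {r2:.3f}, LOO Q2 = {q2:.3f}")
    ```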

  7. Toward Quantitative Small Animal Pinhole SPECT: Assessment of Quantitation Accuracy Prior to Image Compensations

    PubMed Central

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose: We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures: The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results: Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to -18% (Tc-99m or In-111) or -41% (I-125). The inclusion of scatter effects changed the above numbers to -12% (Tc-99m or In-111) and -21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions: We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346

  8. A Quantitative Infrared Spectroscopy Experiment.

    ERIC Educational Resources Information Center

    Krahling, Mark D.; Eliason, Robert

    1985-01-01

    Although infrared spectroscopy is used primarily for qualitative identifications, it is possible to use it as a quantitative tool as well. The use of a standard curve to determine percent methanol in a 2,2,2-trifluoroethanol sample is described. Background information, experimental procedures, and results obtained are provided. (JN)

  9. Evaluating Attitudes, Skill, and Performance in a Learning-Enhanced Quantitative Methods Course: A Structural Modeling Approach.

    ERIC Educational Resources Information Center

    Harlow, Lisa L.; Burkholder, Gary J.; Morrow, Jennifer A.

    2002-01-01

    Used a structural modeling approach to evaluate relations among attitudes, initial skills, and performance in a Quantitative Methods course that involved students in active learning. Results largely confirmed hypotheses offering support for educational reform efforts that propose actively involving students in the learning process, especially in…

  10. Safe uses of Hill's model: an exact comparison with the Adair-Klotz model

    PubMed Central

    2011-01-01

    Background: The Hill function and the related Hill model are used frequently to study processes in the living cell. There are very few studies investigating the situations in which the model can be safely used. For example, it has been shown, at the mean field level, that the dose response curve obtained from a Hill model agrees well with the dose response curves obtained from a more complicated Adair-Klotz model, provided that the parameters of the Adair-Klotz model describe strongly cooperative binding. However, it has not been established whether such findings can be extended to other properties and non-mean field (stochastic) versions of the same, or other, models. Results: In this work a rather generic quantitative framework for approaching such a problem is suggested. The main idea is to focus on comparing the particle number distribution functions for Hill's and Adair-Klotz's models instead of investigating a particular property (e.g. the dose response curve). The approach is valid for any model that can be mathematically related to the Hill model. The Adair-Klotz model is used to illustrate the technique. One main and two auxiliary similarity measures were introduced to compare the distributions in a quantitative way. Both time-dependent and equilibrium properties of the similarity measures were studied. Conclusions: A strongly cooperative Adair-Klotz model can be replaced by a suitable Hill model in such a way that any property computed from the two models, even one describing stochastic features, is approximately the same. The quantitative analysis showed that boundaries of the regions in the parameter space where the models behave in the same way exhibit a rather rich structure. PMID:21521501
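
    The two mean-field dose-response curves under comparison are compact enough to sketch. Below, the fractional saturation of a two-site Adair-Klotz receptor is compared with a matched Hill function; the stepwise constants with K2 >> K1 encode strongly cooperative binding, and all parameter values are illustrative.

    ```python
    import numpy as np

    def adair_klotz_2(x, K1, K2):
        """Fractional saturation of a two-site Adair-Klotz receptor."""
        return (K1 * x + 2 * K1 * K2 * x**2) / (2 * (1 + K1 * x + K1 * K2 * x**2))

    def hill(x, K, n):
        """Hill function with apparent dissociation constant K and coefficient n."""
        return x**n / (K**n + x**n)

    x = np.logspace(-3, 2, 200)                 # ligand concentration (a.u.)
    y_ak = adair_klotz_2(x, K1=0.1, K2=10.0)    # strong positive cooperativity
    y_h = hill(x, K=1.0, n=1.9)                 # matched Hill approximation
    print("max |difference|:", np.abs(y_ak - y_h).max())
    ```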

  11. AutoQSAR: an automated machine learning tool for best-practice quantitative structure-activity relationship modeling.

    PubMed

    Dixon, Steven L; Duan, Jianxin; Smith, Ethan; Von Bargen, Christopher D; Sherman, Woody; Repasky, Matthew P

    2016-10-01

    We introduce AutoQSAR, an automated machine-learning application to build, validate and deploy quantitative structure-activity relationship (QSAR) models. The process of descriptor generation, feature selection and the creation of a large number of QSAR models has been automated into a single workflow within AutoQSAR. The models are built using a variety of machine-learning methods, and each model is scored using a novel approach. Effectiveness of the method is demonstrated through comparison with literature QSAR models using identical datasets for six end points: protein-ligand binding affinity, solubility, blood-brain barrier permeability, carcinogenicity, mutagenicity and bioaccumulation in fish. AutoQSAR demonstrates similar or better predictive performance as compared with published results for four of the six endpoints while requiring minimal human time and expertise.

  12. Concepts and challenges in quantitative pharmacology and model-based drug development.

    PubMed

    Zhang, Liping; Pfister, Marc; Meibohm, Bernd

    2008-12-01

    Model-based drug development (MBDD) has been recognized as a concept to improve the efficiency of drug development. The acceptance of MBDD from regulatory agencies, industry, and academia has been growing, yet today's drug development practice is still distinctly distant from MBDD. This manuscript is aimed at clarifying the concept of MBDD and proposing practical approaches for implementing MBDD in the pharmaceutical industry. The following concepts are defined and distinguished: PK-PD modeling, exposure-response modeling, pharmacometrics, quantitative pharmacology, and MBDD. MBDD is viewed as a paradigm and a mindset in which models constitute the instruments and aims of drug development efforts. MBDD covers the whole spectrum of the drug development process instead of being limited to a certain type of modeling technique or application area. The implementation of MBDD requires pharmaceutical companies to foster innovation and make changes at three levels: (1) to establish mindsets that are willing to get acquainted with MBDD, (2) to align processes that are adaptive to the requirements of MBDD, and (3) to create a closely collaborating organization in which all members play a role in MBDD. Pharmaceutical companies that are able to embrace the changes MBDD poses will likely be able to improve their success rate in drug development, and the beneficiaries will ultimately be the patients in need.

  13. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    PubMed Central

    Zhang, Yonghe

    2010-01-01

    Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a dual method of multi-functional prediction, which has much more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table. PMID:21151444

  14. Quantitative analysis of dinuclear manganese(II) EPR spectra

    NASA Astrophysics Data System (ADS)

    Golombek, Adina P.; Hendrich, Michael P.

    2003-11-01

    A quantitative method for the analysis of EPR spectra from dinuclear Mn(II) complexes is presented. The complex [(Me₃TACN)₂Mn(II)₂(μ-OAc)₃]BPh₄ (1) (Me₃TACN = N,N′,N″-trimethyl-1,4,7-triazacyclononane; OAc = acetate(1−); BPh₄ = tetraphenylborate(1−)) was studied with EPR spectroscopy at X- and Q-band frequencies, for both perpendicular and parallel polarizations of the microwave field, and with variable temperature (2-50 K). Complex 1 is an antiferromagnetically coupled dimer which shows signals from all excited spin manifolds, S = 1 to 5. The spectra were simulated with diagonalization of the full spin Hamiltonian, which includes the Zeeman and zero-field splittings of the individual manganese sites within the dimer, the exchange and dipolar coupling between the two manganese sites of the dimer, and the nuclear hyperfine coupling for each manganese ion. All possible transitions for all spin manifolds were simulated, with the intensities determined from the calculated probability of each transition. In addition, the non-uniform broadening of all resonances was quantitatively predicted using a lineshape model based on D- and r-strain. As the temperature is increased from 2 K, an 11-line hyperfine pattern characteristic of dinuclear Mn(II) is first observed from the S = 3 manifold. D- and r-strain are the dominant broadening effects that determine where the hyperfine pattern will be resolved. A single unique parameter set was found to simulate all spectra arising for all temperatures, microwave frequencies, and microwave modes. The simulations are quantitative, allowing for the first time the determination of species concentrations directly from EPR spectra. Thus, this work describes the first method for the quantitative characterization of EPR spectra of dinuclear manganese centers in model complexes and proteins. The exchange coupling parameter J for complex 1 was determined (J = −1.5 ± 0.3 cm⁻¹; H_ex = −2J S₁·S₂) and found to be in agreement with a previous determination.

  15. Quantitative risk stratification in Markov chains with limiting conditional distributions.

    PubMed

    Chan, David C; Pollett, Philip K; Weinstein, Milton C

    2009-01-01

    Many clinical decisions require patient risk stratification. The authors introduce the concept of limiting conditional distributions, which describe the equilibrium proportion of surviving patients occupying each disease state in a Markov chain with death. Such distributions can quantitatively describe risk stratification. The authors first establish conditions for the existence of a positive limiting conditional distribution in a general Markov chain and describe a framework for risk stratification using the limiting conditional distribution. They then apply their framework to a clinical example of a treatment indicated for high-risk patients, first to infer the risk of patients selected for treatment in clinical trials and then to predict the outcomes of expanding treatment to other populations of risk. For the general chain, a positive limiting conditional distribution exists only if patients in the earliest state have the lowest combined risk of progression or death. The authors show that in their general framework, outcomes and population risk are interchangeable. For the clinical example, they estimate that previous clinical trials have selected the upper quintile of patient risk for this treatment, but they also show that expanded treatment would weakly dominate this degree of targeted treatment, and universal treatment may be cost-effective. Limiting conditional distributions exist in most Markov models of progressive diseases and are well suited to represent risk stratification quantitatively. This framework can characterize patient risk in clinical trials and predict outcomes for other populations of risk.
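
    A limiting conditional (quasi-stationary) distribution can be sketched directly: restrict the transition matrix to the transient (alive) states and take the left eigenvector for its largest eigenvalue. The three-state chain below is a made-up progressive-disease example, not the clinical model analyzed in the paper.

    ```python
    import numpy as np

    # Transition probabilities among transient states (mild, moderate, severe);
    # rows sum to less than 1, the remainder being the probability of death.
    P = np.array([[0.90, 0.07, 0.01],
                  [0.05, 0.85, 0.07],
                  [0.01, 0.05, 0.88]])

    # The limiting conditional distribution is the normalized left eigenvector
    # of P for its largest eigenvalue (Perron-Frobenius).
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmax(eigvals.real)
    pi = np.abs(eigvecs[:, k].real)
    pi /= pi.sum()

    # pi gives the equilibrium proportion of surviving patients in each state.
    print("limiting conditional distribution:", np.round(pi, 3))
    ```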

  16. Quantitative structure-activation barrier relationship modeling for Diels-Alder ligations utilizing quantum chemical structural descriptors.

    PubMed

    Nandi, Sisir; Monesi, Alessandro; Drgan, Viktor; Merzel, Franci; Novič, Marjana

    2013-10-30

    In the present study, we show the correlation of quantum chemical structural descriptors with the activation barriers of Diels-Alder ligations. A set of 72 non-catalysed Diels-Alder reactions was subjected to quantitative structure-activation barrier relationship (QSABR) modeling under the framework of theoretical quantum chemical descriptors calculated solely from the structures of the diene and dienophile reactants. Experimental activation barrier data were obtained from the literature. Descriptors were computed at the Hartree-Fock level with the 6-31G(d) basis set as implemented in the Gaussian 09 software. Variable selection and model development were carried out by stepwise multiple linear regression. The predictive performance of the QSABR model was assessed using the training and test set concept and by calculating the leave-one-out cross-validated Q² and predictive R² values. The QSABR model can explain and predict 86.5% and 80% of the variance, respectively, in the activation energy barrier data. Alternatively, a neural network model based on back-propagation of errors was developed to assess the nonlinearity of the sought correlations between theoretical descriptors and experimental reaction barriers. Reasonable predictability for the activation barriers of the test set reactions was obtained, which enabled an exploration and interpretation of the significant variables responsible for the Diels-Alder interaction between dienes and dienophiles. Thus, QSABR modelling, which provides efficient and fast prediction of the activation barriers of Diels-Alder reactions, turns out to be a meaningful alternative to computation based on transition state theory.

  17. Marginal iodide deficiency and thyroid function: dose-response analysis for quantitative pharmacokinetic modeling.

    PubMed

    Gilbert, M E; McLanahan, E D; Hedge, J; Crofton, K M; Fisher, J W; Valentín-Blasini, L; Blount, B C

    2011-04-28

    Severe iodine deficiency (ID) results in adverse health outcomes and remains a benchmark for understanding the effects of developmental hypothyroidism. The implications of marginal ID, however, remain less well known. The current study examined the relationship between graded levels of ID in rats and serum thyroid hormones, thyroid iodine content, and urinary iodide excretion. The goals of this study were to provide parametric and dose-response information for the development of a quantitative model of the thyroid axis. Female Long Evans rats were fed casein-based diets containing varying iodine (I) concentrations for 8 weeks. Diets were created by adding 975, 200, 125, 25, or 0 μg/kg I to the base diet (~25 μg I/kg chow) to produce 5 nominal I levels, ranging from excess (basal + added I, Treatment 1: 1000 μg I/kg chow) to deficient (Treatment 5: 25 μg I/kg chow). Food intake and body weight were monitored throughout, and on 2 consecutive days each week over the 8-week exposure period animals were placed in metabolism cages to capture urine. Food and water intake and body weight gain did not differ among treatment groups. Serum T4 was dose-dependently reduced relative to Treatment 1, with significant declines (19 and 48%) in the two lowest I groups, and no significant changes in serum T3 or TSH were detected. Increases in thyroid weight and decreases in thyroidal and urinary iodide content were observed as a function of decreasing dietary I. The data were compared with predictions from a recently published biologically based dose-response (BBDR) model for ID. Relative to model predictions, female Long Evans rats under the conditions of this study appeared more resilient to low I intake. These results challenge existing models and provide essential information for the development of quantitative BBDR models for ID during pregnancy and lactation. Published by Elsevier Ireland Ltd.

  18. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506

  19. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches.

    PubMed

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations.
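
    The link between pause time and initial binding kinetics described in the two records above can be sketched with a back-of-envelope calculation: if initial TPP binding is pseudo-first-order, the probability that the switch commits during a polymerase pause of length τ is 1 − exp(−k_on·[TPP]·τ). The rate constant and pause duration below are illustrative assumptions, not the measured E. coli values.

    ```python
    import numpy as np

    kon = 1e5    # M^-1 s^-1, initial TPP binding rate constant (assumption)
    tau = 10.0   # s, polymerase pause duration (assumption)

    # Probability that TPP binds before the pause ends, for a range of [TPP];
    # the response sharpens around [TPP] ~ ln(2) / (kon * tau).
    tpp = np.logspace(-8, -3, 6)   # TPP concentration, M
    p_switch = 1.0 - np.exp(-kon * tpp * tau)
    for c, p in zip(tpp, p_switch):
        print(f"[TPP] = {c:.0e} M -> P(switch during pause) = {p:.3f}")
    ```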

  20. Pharmacodynamic Model To Describe the Concentration-Dependent Selection of Cefotaxime-Resistant Escherichia coli

    PubMed Central

    Olofsson, Sara K.; Geli, Patricia; Andersson, Dan I.; Cars, Otto

    2005-01-01

    Antibiotic dosing regimens may vary in their capacity to select mutants. Our hypothesis was that selection of a more resistant bacterial subpopulation would increase with the time within a selective window (SW), i.e., when drug concentrations fall between the MICs of two strains. An in vitro kinetic model was used to study the selection of two Escherichia coli strains with different susceptibilities to cefotaxime. The bacterial mixtures were exposed to cefotaxime for 24 h and SWs of 1, 2, 4, 8, and 12 h. A mathematical model was developed that described the selection of preexisting and newborn mutants and the post-MIC effect (PME) as functions of pharmacokinetic parameters. Our main conclusions were as follows: (i) the selection between preexisting mutants increased with the time within the SW; (ii) the emergence and selection of newborn mutants increased with the time within the SW (with the shortest time, only 4% of the preexisting mutants were replaced by newborn mutants, compared to the longest times, where 100% were replaced); and (iii) the PME increased with the area under the concentration-time curve (AUC) and was slightly more pronounced with a long elimination half-life (T1/2) than with a short T1/2 when the AUC was fixed. We showed that, in a dynamic competition between strains with different levels of resistance, the appearance of newborn high-level-resistant mutants from the parental strains and the PME can strongly affect the outcome of the selection, and that pharmacodynamic models can be used to predict the outcome of resistance development. PMID:16304176
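
    The selective-window idea maps directly onto simple pharmacokinetics. A minimal sketch (one-compartment IV bolus with invented numbers, not the study's fitted model):

        # Time within the selective window (SW) for C(t) = C0 * exp(-k*t),
        # where the SW is MIC_susceptible < C(t) < MIC_resistant.
        import math

        def time_in_selective_window(c0, mic_low, mic_high, half_life_h):
            k = math.log(2) / half_life_h            # first-order elimination
            if c0 <= mic_low:
                return 0.0                           # never enters the window
            t_exit = math.log(c0 / mic_low) / k      # falls below the lower MIC
            t_enter = math.log(c0 / mic_high) / k if c0 > mic_high else 0.0
            return t_exit - t_enter

        # Doubling the half-life doubles the time spent inside the SW:
        print(time_in_selective_window(c0=16, mic_low=0.5, mic_high=4, half_life_h=1))  # 3.0 h
        print(time_in_selective_window(c0=16, mic_low=0.5, mic_high=4, half_life_h=2))  # 6.0 h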

  1. Quantitative laser diagnostic and modeling study of C2 and CH chemistry in combustion.

    PubMed

    Köhler, Markus; Brockhinke, Andreas; Braun-Unkhoff, Marina; Kohse-Höinghaus, Katharina

    2010-04-15

    Quantitative concentration measurements of CH and C2 have been performed in laminar, premixed, flat flames of propene and cyclopentene with varying stoichiometry. A combination of cavity ring-down (CRD) spectroscopy and laser-induced fluorescence (LIF) was used to enable sensitive detection of these species with high spatial resolution. Previously, CH and C2 chemistry had been studied, predominantly in methane flames, to understand potential correlations of their formation and consumption. For flames of larger hydrocarbon fuels, however, quantitative information on these small intermediates is scarce, especially under fuel-rich conditions. Also, the combustion chemistry of C2 in particular has not been studied in detail, and although it has often been observed, its role in potential build-up reactions of higher hydrocarbon species is not well understood. The quantitative measurements performed here are the first to detect both species with good spatial resolution and high sensitivity in the same experiment in flames of C3 and C5 fuels. The experimental profiles were compared with results of combustion modeling to reveal details of the formation and consumption of these important combustion molecules, and the investigation was intended to assist the further understanding of the role of C2 and of its potential chemical interdependences with CH and other small radicals.
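
    For context, CRD converts measured ring-down times into absolute absorption; in the standard textbook form for an absorber filling the cavity (not a formula quoted from this paper, and with illustrative numbers):

        # alpha = (1/c) * (1/tau - 1/tau0), with tau and tau0 the ring-down
        # times with and without the absorber present.
        C_LIGHT_CM_S = 2.998e10  # speed of light, cm/s

        def crd_absorption(tau_s, tau0_s):
            return (1.0 / C_LIGHT_CM_S) * (1.0 / tau_s - 1.0 / tau0_s)

        # Example: a 2.0 us empty-cavity decay shortened to 1.8 us by the absorber.
        print(f"alpha = {crd_absorption(1.8e-6, 2.0e-6):.3e} cm^-1")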

  2. Quantitative targeting maps based on experimental investigations for a branched tube model in magnetic drug targeting

    NASA Astrophysics Data System (ADS)

    Gitter, K.; Odenbach, S.

    2011-12-01

    Magnetic drug targeting (MDT), because of its high targeting efficiency, is a promising approach for tumour treatment. Unwanted side effects are considerably reduced, since the nanoparticles are concentrated within the target region by a magnetic field. Nevertheless, understanding the transport phenomena of nanoparticles in an artery system is still challenging. This work presents experimental results for a branched tube model. Quantitative results describe, for example, the net amount of nanoparticles that are steered into the chosen region by the magnetic field. As a result of the measurements, novel drug-targeting maps are presented, combining, e.g. the magnetic volume force, the position of the magnet and the net amount of targeted nanoparticles. The targeting maps are valuable for the evaluation and comparison of setups and are also helpful for the design and optimisation of a magnet system with an appropriate strength and distribution of the field gradient. The maps indicate the danger of accretion within the tube, and also show the promising result that up to 97% of the nanoparticles were successfully targeted.

  3. Development and evaluation of a model-based downscatter compensation method for quantitative I-131 SPECT

    PubMed Central

    Song, Na; Du, Yong; He, Bin; Frey, Eric C.

    2011-01-01

    Purpose: The radionuclide 131I has found widespread use in targeted radionuclide therapy (TRT), partly due to the fact that it emits photons that can be imaged to perform treatment planning or posttherapy dose verification as well as beta rays that are suitable for therapy. In both the treatment planning and dose verification applications, it is necessary to estimate the activity distribution in organs or tumors at several time points. In vivo estimates of the 131I activity distribution at each time point can be obtained from quantitative single-photon emission computed tomography (QSPECT) images and organ activity estimates can be obtained either from QSPECT images or quantification of planar projection data. However, in addition to the photon used for imaging, 131I decay results in emission of a number of other higher-energy photons with significant abundances. These higher-energy photons can scatter in the body, collimator, or detector and be counted in the 364 keV photopeak energy window, resulting in reduced image contrast and degraded quantitative accuracy; these photons are referred to as downscatter. The goal of this study was to develop and evaluate a model-based downscatter compensation method specifically designed for the compensation of high-energy photons emitted by 131I and detected in the imaging energy window. Methods: In the evaluation study, we used a Monte Carlo simulation (MCS) code that had previously been validated for other radionuclides. Thus, in preparation for the evaluation study, we first validated the code for 131I imaging simulation by comparison with experimental data. Next, we assessed the accuracy of the downscatter model by comparing downscatter estimates with MCS results. Finally, we combined the downscatter model with iterative reconstruction-based compensation for attenuation (A) and scatter (S) and the full (D) collimator-detector response of the 364 keV photons to form a comprehensive compensation method. We evaluated this

  4. Qualitative and Quantitative Distinctions in Personality Disorder

    PubMed Central

    Wright, Aidan G. C.

    2011-01-01

    The “categorical-dimensional debate” has catalyzed a wealth of empirical advances in the study of personality pathology. However, this debate is merely one articulation of a broader conceptual question regarding whether to define and describe psychopathology as a quantitatively extreme expression of normal functioning or as qualitatively distinct in its process. In this paper I argue that dynamic models of personality (e.g., object-relations, cognitive-affective processing system) offer the conceptual scaffolding to reconcile these seemingly incompatible approaches to characterizing the relationship between normal and pathological personality. I propose that advances in personality assessment that sample behavior and experiences intensively provide the empirical techniques, whereas interpersonal theory offers an integrative theoretical framework, for accomplishing this goal. PMID:22804676

  5. The Physics of Earthquakes: In the Quest for a Unified Theory (or Model) That Quantitatively Describes the Entire Process of an Earthquake Rupture, From its Nucleation to the Dynamic Regime and to its Arrest

    NASA Astrophysics Data System (ADS)

    Ohnaka, M.

    2004-12-01

    For the past four decades, great progress has been made in understanding earthquake source processes. In particular, recent progress in the field of the physics of earthquakes has contributed substantially to unraveling the earthquake generation process in quantitative terms. Yet, a fundamental problem remains unresolved in this field. The constitutive law that governs the behavior of earthquake ruptures is the basis of earthquake physics, and the governing law plays a fundamental role in accounting for the entire process of an earthquake rupture, from its nucleation to the dynamic propagation to its arrest, quantitatively in a unified and consistent manner. Without establishing the rational constitutive law, therefore, the physics of earthquakes cannot be a quantitative science in a true sense, and establishing it is urgent. However, what the constitutive law for earthquake ruptures ought to be, and how it should be formulated, has been controversial for the past two decades and remains so. Resolving the controversy is a necessary step towards a more complete, unified theory of earthquake physics, and now the time is ripe to do so. Because of its fundamental importance, we have to discuss thoroughly and rigorously what the constitutive law ought to be from the standpoint of the physics of rock friction and fracture, on the basis of solid evidence. There are prerequisites for the constitutive formulation. The brittle, seismogenic layer and individual faults therein are characterized by inhomogeneity, and fault inhomogeneity has profound implications for earthquake ruptures. In addition, rupture phenomena including earthquakes are inherently scale dependent; indeed, some of the physical quantities inherent in rupture exhibit scale dependence. To treat scale-dependent physical quantities inherent in the rupture over a broad scale range quantitatively in a unified and consistent manner, it is critical to

  6. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  7. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    PubMed

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
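
    The two-level design reduces, conceptually, to a feature matrix feeding a supervised learner. A schematic stand-in using scikit-learn (CaPTk itself is a standalone platform and this is not its API; the data here are random placeholders):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 40))    # level 1: 100 subjects x 40 imaging features
        y = rng.integers(0, 2, size=100)  # binary outcome, e.g. treatment response

        # Level 2: a multivariate model turns the feature panel into a predictive signature.
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        print("mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean())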

  8. A Dynamic Energy Budget (DEB) model to describe Laternula elliptica (King, 1832) seasonal feeding and metabolism

    PubMed Central

    Agüera, Antonio; Ahn, In-Young; Guillaumot, Charlène; Danis, Bruno

    2017-01-01

    Antarctic marine organisms are adapted to an extreme environment, characterized by a very low but stable temperature and a strong seasonality in food availability arising from variations in day length. Ocean organisms are particularly vulnerable to global climate change with some regions being impacted by temperature increase and changes in primary production. Climate change also affects the biotic components of marine ecosystems and has an impact on the distribution and seasonal physiology of Antarctic marine organisms. Knowledge on the impact of climate change in key species is highly important because their performance affects ecosystem functioning. To predict the effects of climate change on marine ecosystems, a holistic understanding of the life history and physiology of Antarctic key species is urgently needed. DEB (Dynamic Energy Budget) theory captures the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model is a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. In this study, we estimate the DEB model parameters for the bivalve Laternula elliptica using literature-extracted and field data. The DEB model we present here aims at better understanding the biology of L. elliptica and its levels of adaptation to its habitat with a special focus on food seasonality. The model parameters describe a metabolism specifically adapted to low temperatures, with a low maintenance cost and a high capacity to uptake and mobilise energy, providing this organism with a level of energetic performance matching that of related species from temperate regions. It was also found that L. elliptica has a large energy reserve that allows it to endure long periods of starvation. Additionally, we applied DEB parameters to time-series data on biological traits (organism condition, gonad growth) to describe the effect of a

  9. A Dynamic Energy Budget (DEB) model to describe Laternula elliptica (King, 1832) seasonal feeding and metabolism.

    PubMed

    Agüera, Antonio; Ahn, In-Young; Guillaumot, Charlène; Danis, Bruno

    2017-01-01

    Antarctic marine organisms are adapted to an extreme environment, characterized by a very low but stable temperature and a strong seasonality in food availability arising from variations in day length. Ocean organisms are particularly vulnerable to global climate change with some regions being impacted by temperature increase and changes in primary production. Climate change also affects the biotic components of marine ecosystems and has an impact on the distribution and seasonal physiology of Antarctic marine organisms. Knowledge on the impact of climate change in key species is highly important because their performance affects ecosystem functioning. To predict the effects of climate change on marine ecosystems, a holistic understanding of the life history and physiology of Antarctic key species is urgently needed. DEB (Dynamic Energy Budget) theory captures the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model is a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. In this study, we estimate the DEB model parameters for the bivalve Laternula elliptica using literature-extracted and field data. The DEB model we present here aims at better understanding the biology of L. elliptica and its levels of adaptation to its habitat with a special focus on food seasonality. The model parameters describe a metabolism specifically adapted to low temperatures, with a low maintenance cost and a high capacity to uptake and mobilise energy, providing this organism with a level of energetic performance matching that of related species from temperate regions. It was also found that L. elliptica has a large energy reserve that allows it to endure long periods of starvation. Additionally, we applied DEB parameters to time-series data on biological traits (organism condition, gonad growth) to describe the effect of a
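
    In standard DEB notation, the bookkeeping behind such a parameterisation can be written as (textbook form; the study's estimated parameter values are not reproduced here):

        \frac{dE}{dt} = \dot{p}_A - \dot{p}_C, \qquad
        \frac{dV}{dt} = \frac{\kappa\,\dot{p}_C - \dot{p}_M}{[E_G]}

    where E is reserve energy, V structural volume, \dot{p}_A the assimilation rate, \dot{p}_C the reserve mobilisation rate, \dot{p}_M somatic maintenance, [E_G] the volume-specific cost of structure, and \kappa the fraction of mobilised reserve allocated to soma (the remainder funds maturity and reproduction). A low \dot{p}_M together with a large reserve E is what permits the long starvation periods described above.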

  10. Two schemes for quantitative photoacoustic tomography based on Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yubin; Yuan, Zhen, E-mail: zhenyuan@umac.mo

    Purpose: The aim of this study was to develop novel methods for photoacoustically determining the optical absorption coefficient of biological tissues using Monte Carlo (MC) simulation. Methods: In this study, the authors propose two quantitative photoacoustic tomography (PAT) methods for mapping the optical absorption coefficient. The reconstruction methods combine conventional PAT with MC simulation in a novel way to determine the optical absorption coefficient of biological tissues or organs. Specifically, the authors’ two schemes were theoretically and experimentally examined using simulations, tissue-mimicking phantoms, ex vivo, and in vivo tests. In particular, the authors explored these methods using several objects with different absorption contrasts embedded in turbid media and by using high-absorption media when the diffusion approximation was not effective at describing the photon transport. Results: The simulations and experimental tests showed that the reconstructions were quantitatively accurate in terms of the locations, sizes, and optical properties of the targets. The positions of the recovered targets were assessed by the property profiles, where the authors discovered that the off-center error was less than 0.1 mm for the circular target. Meanwhile, the sizes and quantitative optical properties of the targets were quantified by estimating the full width at half maximum of the optical absorption property. Interestingly, for the reconstructed sizes, the authors discovered that the errors ranged from 0 for relatively small-size targets to 26% for relatively large-size targets, whereas for the recovered optical properties, the errors ranged from 0% to 12.5% for different cases. Conclusions: The authors found that their methods can quantitatively reconstruct absorbing objects of different sizes and optical contrasts even when the diffusion approximation is unable to accurately describe the photon propagation in biological tissues. In particular
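
    The fixed-point idea underlying MC-based quantitative PAT can be shown compactly. In this toy sketch a Beer-Lambert expression stands in for the Monte Carlo fluence estimate and all numbers are invented; the authors' actual schemes are full image-domain reconstructions:

        # Quantitative PAT: p0 = Gamma * mu_a * Phi(mu_a), so iterate
        # mu_a <- p0 / (Gamma * Phi(mu_a)) until self-consistent.
        import math

        GAMMA, PHI0, DEPTH_CM = 0.2, 1.0, 0.5     # Grueneisen, surface fluence, depth

        def fluence(mu_a):                        # placeholder for an MC estimate
            return PHI0 * math.exp(-mu_a * DEPTH_CM)

        mu_true = 0.8                             # "unknown" absorption, 1/cm
        p0 = GAMMA * mu_true * fluence(mu_true)   # simulated initial pressure

        mu = 0.1                                  # initial guess
        for _ in range(50):
            mu = p0 / (GAMMA * fluence(mu))
        print(f"recovered mu_a = {mu:.3f} /cm (true {mu_true})")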

  11. Quantitative ultrasound molecular imaging by modeling the binding kinetics of targeted contrast agent

    NASA Astrophysics Data System (ADS)

    Turco, Simona; Tardy, Isabelle; Frinking, Peter; Wijkstra, Hessel; Mischi, Massimo

    2017-03-01

    Ultrasound molecular imaging (USMI) is an emerging technique to monitor diseases at the molecular level by the use of novel targeted ultrasound contrast agents (tUCA). These consist of microbubbles functionalized with targeting ligands with high-affinity for molecular markers of specific disease processes, such as cancer-related angiogenesis. Among the molecular markers of angiogenesis, the vascular endothelial growth factor receptor 2 (VEGFR2) is recognized to play a major role. In response, the clinical-grade tUCA BR55 was recently developed, consisting of VEGFR2-targeting microbubbles which can flow through the entire circulation and accumulate where VEGFR2 is over-expressed, thus causing selective enhancement in areas of active angiogenesis. Discrimination between bound and free microbubbles is crucial to assess cancer angiogenesis. Currently, this is done non-quantitatively by looking at the late enhancement, about 10 min after injection, or by calculation of the differential targeted enhancement, requiring the application of a high-pressure ultrasound (US) burst to destroy all the microbubbles in the acoustic field and isolate the signal coming only from bound microbubbles. In this work, we propose a novel method based on mathematical modeling of the binding kinetics during the tUCA first pass, thus reducing the acquisition time and with no need for a destructive US burst. Fitting time-intensity curves measured with USMI by the proposed model enables the assessment of cancer angiogenesis at both the vascular and molecular levels. This is achieved by estimation of quantitative parameters related to the microvascular architecture and microbubble binding. The proposed method was tested in 11 prostate-tumor bearing rats by performing USMI after injection of BR55, and showed good agreement with current USMI methods. The novel information provided by the proposed method, possibly combined with the current non-quantitative methods, may bring deeper insight into
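
    A generic ligand-receptor sketch of the bound-microbubble kinetics (illustrative only; the rate constants and input shape are invented, not the authors' published first-pass model):

        # dB/dt = k_on * C_free(t) * (Rmax - B) - k_off * B
        import numpy as np
        from scipy.integrate import odeint

        K_ON, K_OFF, RMAX = 0.5, 0.05, 1.0

        def free_concentration(t):          # bolus-like first pass, peaks at t = 1 s
            return t * np.exp(1.0 - t)

        def dbdt(b, t):
            return K_ON * free_concentration(t) * (RMAX - b) - K_OFF * b

        t = np.linspace(0.0, 20.0, 200)
        bound = odeint(dbdt, 0.0, t).ravel()
        print(f"bound fraction at t = 20 s: {bound[-1]:.3f}")

    Fitting such a curve to the measured time-intensity data is what yields separate vascular (flow) and molecular (binding) parameters without a destructive US burst.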

  12. Quantitative evaluation of mucosal vascular contrast in narrow band imaging using Monte Carlo modeling

    NASA Astrophysics Data System (ADS)

    Le, Du; Wang, Quanzeng; Ramella-Roman, Jessica; Pfefer, Joshua

    2012-06-01

    Narrow-band imaging (NBI) is a spectrally-selective reflectance imaging technique for enhanced visualization of superficial vasculature. Prior clinical studies have indicated NBI's potential for detection of vasculature abnormalities associated with gastrointestinal mucosal neoplasia. While the basic mechanisms behind the increased vessel contrast - hemoglobin absorption and tissue scattering - are known, a quantitative understanding of the effect of tissue and device parameters has not been achieved. In this investigation, we developed and implemented a numerical model of light propagation that simulates NBI reflectance distributions. This was accomplished by incorporating mucosal tissue layers and vessel-like structures in a voxel-based Monte Carlo algorithm. Epithelial and mucosal layers as well as blood vessels were defined using wavelength-specific optical properties. The model was implemented to calculate reflectance distributions and vessel contrast values as a function of vessel depth (0.05 to 0.50 mm) and diameter (0.01 to 0.10 mm). These relationships were determined for NBI wavelengths of 410 nm and 540 nm, as well as broadband illumination common to standard endoscopic imaging. The effects of illumination bandwidth on vessel contrast were also simulated. Our results provide a quantitative analysis of the effect of absorption and scattering on vessel contrast. Additional insights and potential approaches for improving NBI system contrast are discussed.
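
    A Weber-type contrast definition is commonly used for such vessel-visibility comparisons (an assumption here; the abstract does not state the exact metric used):

        def vessel_contrast(i_background, i_vessel):
            return (i_background - i_vessel) / i_background

        print(vessel_contrast(1.0, 0.6))  # stronger hemoglobin absorption -> higher contrast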

  13. Quantitative analysis of single-molecule superresolution images

    PubMed Central

    Coltharp, Carla; Yang, Xinxing; Xiao, Jie

    2014-01-01

    This review highlights the quantitative capabilities of single-molecule localization-based superresolution imaging methods. In addition to revealing fine structural details, the molecule coordinate lists generated by these methods provide the critical ability to quantify the number, clustering, and colocalization of molecules with 10 – 50 nm resolution. Here we describe typical workflows and precautions for quantitative analysis of single-molecule superresolution images. These guidelines include potential pitfalls and essential control experiments, allowing critical assessment and interpretation of superresolution images. PMID:25179006

  14. Application of interleaving models to describe intrusive layers in the Deep Polar Water of the Arctic Basin

    NASA Astrophysics Data System (ADS)

    Zhurbas, Nataliya; Kuzmina, Natalia; Lyzhkov, Dmitry; Izvekova, Yulia N.

    2016-04-01

    Interleaving models of pure thermohaline and baroclinic frontal zones of finite width are applied to describe intrusions at the fronts found in the upper part of the Deep Polar Water, the Eurasian basin, under stable-stable thermohaline stratification. It is assumed that differential mixing is the main mechanism of the intrusion formation. Different parameterizations of differential mixing (Merryfield, 2002; Kuzmina et al., 2011) are used in the models. Important parameters of interleaving such as the growth rate, vertical scale, and slope of the most unstable modes are calculated. It is found that the interleaving model of a pure thermohaline front can satisfactorily describe the important parameters of intrusions observed at a thermohaline, very low baroclinicity front in the Eurasian basin, in accordance with Merryfield's (2002) findings. In the case of a baroclinic front, satisfactory agreement over all the interleaving parameters is found between the model calculations and observations, provided that the vertical momentum diffusivity significantly exceeds the corresponding mass diffusivity. Under specific (reasonable) constraints on the vertical momentum diffusivity, the most unstable mode has a vertical scale approximately two to three times smaller than the vertical scale of the observed intrusions. A thorough discussion of the results is presented. References: Kuzmina N., Rudels B., Zhurbas V., Stipa T. On the structure and dynamical features of intrusive layering in the Eurasian Basin in the Arctic Ocean. J. Geophys. Res., 2011, 116, C00D11, doi:10.1029/2010JC006920. Merryfield W. J. Intrusions in Double-Diffusively Stable Arctic Waters: Evidence for Differential Mixing? J. Phys. Oceanogr., 2002, 32, 1452-1459.

  15. Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model

    PubMed Central

    Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.

    2012-01-01

    Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
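
    The Ktrans/ve estimates referred to above typically come from fitting the standard Tofts model; a minimal discrete version (toy arterial input function and invented parameter values):

        # Ct(t) = Ktrans * integral_0^t Cp(tau) * exp(-Ktrans*(t - tau)/ve) dtau
        import numpy as np

        def tofts_ct(t, cp, ktrans, ve):
            dt = t[1] - t[0]
            kernel = np.exp(-ktrans * t / ve)
            return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

        t = np.linspace(0.0, 5.0, 500)                # minutes
        cp = 5.0 * t * np.exp(-2.0 * t)               # toy arterial input function
        ct = tofts_ct(t, cp, ktrans=0.25, ve=0.3)     # Ktrans in 1/min, ve unitless
        print(f"peak tissue concentration: {ct.max():.3f} (a.u.)")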

  16. PET-based compartmental modeling of (124)I-A33 antibody: quantitative characterization of patient-specific tumor targeting in colorectal cancer.

    PubMed

    Zanzonico, Pat; Carrasquillo, Jorge A; Pandit-Taskar, Neeta; O'Donoghue, Joseph A; Humm, John L; Smith-Jones, Peter; Ruan, Shutian; Divgi, Chaitanya; Scott, Andrew M; Kemeny, Nancy E; Fong, Yuman; Wong, Douglas; Scheinberg, David; Ritter, Gerd; Jungbluth, Achim; Old, Lloyd J; Larson, Steven M

    2015-10-01

    The molecular specificity of monoclonal antibodies (mAbs) directed against tumor antigens has proven effective for targeted therapy of human cancers, as shown by a growing list of successful antibody-based drug products. We describe a novel, nonlinear compartmental model using PET-derived data to determine the "best-fit" parameters and model-derived quantities for optimizing biodistribution of intravenously injected (124)I-labeled antitumor antibodies. As an example of this paradigm, quantitative image and kinetic analyses of anti-A33 humanized mAb (also known as "A33") were performed in 11 colorectal cancer patients. Serial whole-body PET scans of (124)I-labeled A33 and blood samples were acquired and the resulting tissue time-activity data for each patient were fit to a nonlinear compartmental model using the SAAM II computer code. Excellent agreement was observed between fitted and measured parameters of tumor uptake, "off-target" uptake in bowel mucosa, blood clearance, tumor antigen levels, and percent antigen occupancy. This approach should be generally applicable to antibody-antigen systems in human tumors for which the masses of antigen-expressing tumor and of normal tissues can be estimated and for which antibody kinetics can be measured with PET. Ultimately, based on each patient's resulting "best-fit" nonlinear model, a patient-specific optimum mAb dose (in micromoles, for example) may be derived.

  17. Quantitative oxygen concentration imaging in toluene atmospheres using Dual Imaging with Modeling Evaluation

    NASA Astrophysics Data System (ADS)

    Ehn, Andreas; Jonsson, Malin; Johansson, Olof; Aldén, Marcus; Bood, Joakim

    2013-01-01

    Fluorescence lifetimes of toluene as a function of oxygen concentration in toluene/nitrogen/oxygen mixtures have been measured at room temperature using picosecond-laser excitation of the S1-S0 transition at 266 nm. The data satisfy the Stern-Volmer relation with high accuracy, providing an updated value of the Stern-Volmer slope. A newly developed fluorescence lifetime imaging scheme, called Dual Imaging with Modeling Evaluation (DIME), is evaluated and successfully demonstrated for quantitative oxygen concentration imaging in toluene-seeded O2/N2 gas mixtures.

  18. Quantitative oxygen concentration imaging in toluene atmospheres using Dual Imaging with Modeling Evaluation

    NASA Astrophysics Data System (ADS)

    Ehn, Andreas; Jonsson, Malin; Johansson, Olof; Aldén, Marcus; Bood, Joakim

    2012-12-01

    Fluorescence lifetimes of toluene as a function of oxygen concentration in toluene/nitrogen/oxygen mixtures have been measured at room temperature using picosecond-laser excitation of the S1-S0 transition at 266 nm. The data satisfy the Stern-Volmer relation with high accuracy, providing an updated value of the Stern-Volmer slope. A newly developed fluorescence lifetime imaging scheme, called Dual Imaging with Modeling Evaluation (DIME), is evaluated and successfully demonstrated for quantitative oxygen concentration imaging in toluene-seeded O2/N2 gas mixtures.
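
    Given the Stern-Volmer relation tau0/tau = 1 + K_SV [O2], a measured lifetime maps directly to oxygen concentration; in sketch form (tau0 and K_SV below are placeholders, not the updated values reported by the authors):

        def o2_from_lifetime(tau_ns, tau0_ns=68.0, k_sv_per_percent=2.6):
            return (tau0_ns / tau_ns - 1.0) / k_sv_per_percent  # O2 in percent

        print(f"{o2_from_lifetime(10.0):.2f} % O2")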

  19. Methods for Quantitative Creatinine Determination.

    PubMed

    Moore, John F; Sharer, J Daniel

    2017-04-06

    Reliable measurement of creatinine is necessary to assess kidney function, and also to quantitate drug levels and diagnostic compounds in urine samples. The most commonly used methods are based on the Jaffe principle of alkaline creatinine-picric acid complex color formation. However, other compounds commonly found in serum and urine may interfere with Jaffe creatinine measurements. Therefore, many laboratories have made modifications to the basic method to remove or account for these interfering substances. This appendix summarizes the basic Jaffe method, as well as a modified, automated version. Also described is a high-performance liquid chromatography (HPLC) method that separates creatinine from contaminants prior to direct quantification by UV absorption. Lastly, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method is described that uses stable isotope dilution to reliably quantify creatinine in any sample. This last approach has been recommended by experts in the field as a means to standardize all quantitative creatinine methods against an accepted reference. © 2017 by John Wiley & Sons, Inc.

  20. Quantitating Antibody Uptake In Vivo: Conditional Dependence on Antigen Expression Levels

    PubMed Central

    Thurber, Greg M.; Weissleder, Ralph

    2010-01-01

    Purpose Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake with some reports indicating a relationship between uptake and expression and others showing no correlation. Procedures Using a cell line with high EpCAM expression and moderate EGFR expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high affinity antibodies. Results As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than larger vascularized tumors. Conclusions These results are consistent with the prediction that high affinity antibody uptake is dependent on antigen expression levels for saturating doses and delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes. PMID:20809210

  1. Quantitating antibody uptake in vivo: conditional dependence on antigen expression levels.

    PubMed

    Thurber, Greg M; Weissleder, Ralph

    2011-08-01

    Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake with some reports indicating a relationship between uptake and expression and others showing no correlation. Using a cell line with high epithelial cell adhesion molecule expression and moderate epidermal growth factor receptor expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high-affinity antibodies. As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than larger vascularized tumors. These results are consistent with the prediction that high affinity antibody uptake is dependent on antigen expression levels for saturating doses and delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes.
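
    The conclusion can be caricatured in one line: uptake is the smaller of what the vasculature delivers and what the antigen pool can bind (a deliberately crude sketch; the paper's model is a full transport-binding analysis with invented numbers here):

        def tumor_uptake(delivered, antigen_capacity):
            return min(delivered, antigen_capacity)

        for dose in (0.2, 0.5, 2.0):                 # delivered antibody, a.u.
            print(dose, tumor_uptake(dose, antigen_capacity=1.0))

    Below saturation the output tracks the dose (delivery-limited, independent of antigen level); at saturating doses it is capped by the available binding sites.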

  2. Quantitative Radiomics System Decoding the Tumor Phenotype | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Our goal is to construct a publicly available computational radiomics system for the objective and automated extraction of quantitative imaging features that we believe will yield biomarkers of greater prognostic value compared with routinely extracted descriptors of tumor size. We will create a generalized, open, portable, and extensible radiomics platform that is widely applicable across cancer types and imaging modalities and describe how we will use lung and head and neck cancers as models to validate our developments.

  3. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
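
    The parameter-count penalty can be illustrated with the BIC, a large-sample surrogate for the Bayesian evidence used in this work (the authors' criterion additionally weighs prior parameter ranges; the numbers below are invented):

        import numpy as np

        def bic(rss, n_data, n_params):
            # Gaussian-noise BIC: n*log(RSS/n) + k*log(n); lower is better.
            return n_data * np.log(rss / n_data) + n_params * np.log(n_data)

        # A 2-parameter fit with slightly worse residuals can still beat a
        # 6-parameter fit once the parameter count is penalized:
        print(bic(rss=1.20, n_data=50, n_params=2))  # simpler model wins
        print(bic(rss=1.00, n_data=50, n_params=6))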

  4. Quantitative structure-property relationship modeling of Grätzel solar cell dyes.

    PubMed

    Venkatraman, Vishwesh; Åstrand, Per-Olof; Alsberg, Bjørn Kåre

    2014-01-30

    With fossil fuel reserves on the decline, there is increasing focus on the design and development of low-cost organic photovoltaic devices, in particular, dye-sensitized solar cells (DSSCs). The power conversion efficiency (PCE) of a DSSC is heavily influenced by the chemical structure of the dye. However, as far as we know, no predictive quantitative structure-property relationship models for DSSCs with PCE as one of the response variables have been reported. Thus, we report for the first time the successful application of comparative molecular field analysis (CoMFA) and vibrational frequency-based eigenvalue (EVA) descriptors to model molecular structure-photovoltaic performance relationships for a set of 40 coumarin derivatives. The results show that the models obtained provide statistically robust predictions of important photovoltaic parameters such as PCE, the open-circuit voltage (V(OC)), short-circuit current (J(SC)) and the peak absorption wavelength λ(max). Some of our findings based on the analysis of the models are in accordance with those reported in the literature. These structure-property relationships can be applied to the rational structural design and evaluation of new photovoltaic materials. Copyright © 2013 Wiley Periodicals, Inc.

  5. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model

    PubMed Central

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji

    2017-01-01

    In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software “Kongoh” for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1–4 persons’ contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI’s contribution in true contributors and non-contributors by using 2–4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI’s contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software package based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence such as mixtures and small amounts of or degraded DNA. PMID:29149210

  6. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    PubMed

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji

    2017-01-01

    In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software "Kongoh" for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1-4 persons' contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software package based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence such as mixtures and small amounts of or degraded DNA.
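
    The reported LR values follow the standard forensic likelihood-ratio definition (general form, not specific to Kongoh's internals):

        LR = \frac{P(E \mid H_p)}{P(E \mid H_d)}

    where E is the observed mixture profile, H_p is the hypothesis that the POI contributed, and H_d is the alternative that an unknown, unrelated individual contributed instead; LR values much greater than 1 support the POI's contribution.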

  7. A biokinetic model to describe consequences of inhibition/stimulation in DNA-proofreading and repair-1. Development of the model.

    PubMed

    Haschke, H

    2001-10-21

    A biokinetic model is described which deals with the mathematical consequences of the inhibition or stimulation of DNA proofreading. It tracks the growth in the number of DNA mismatch-dependent cells (e.g. cells with a malignant phenotype), where such mismatches arise by the in situ interaction of various substances with nucleotides of the DNA. The model can test for consequences through logic gating in an "if-then" type of analysis in relation to the separate and consecutive processes of proofreading and repair. In particular, the consequences are considered in cases where the efficacy of proofreading and repair is either (i) reduced or prevented (inhibited) or (ii) increased by some form of stimulation. Through its kinetic parameters, the model can be adjusted as new data arising from further investigations become available and are introduced. The model is based on recently published data which show that an increased "mutant fraction" (see note on terms) arises in DNA replication when intracellular nucleotide pools show "asymmetries" (see note on terms). Extraordinarily high mutant fractions can be predicted, and have been recorded, in the presence of proofreading inhibitors. The model expresses in mathematical terms the competition between the development of mismatch-dependent cells and those with authentic genetic information. (Feedback and metastasis effects and those of wild-type replicates are included.) A computerized (numerical) integration of the corresponding set of differential equations is offered. (A diskette with the program CANCER.xls is available upon request.) Copyright 2001 Academic Press

  8. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    Landslide risk changes continuously in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium- to regional-scale studies in the scientific literature, or is left out altogether. In this research we modelled the temporal and spatial changes of debris-flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris-flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris-flow triggering areas and to simulate debris-flow run-out using the Flow-R regional-scale model. In order to determine debris-flow intensities, we used a linear relationship that was found between back-calibrated, physically based Flo-2D simulations (local-scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris-flow vulnerability curves from the literature, and one curve specifically for our case-study area, were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors

  9. Dynamic inundation mapping of Hurricane Harvey flooding in the Houston metro area using hyper-resolution modeling and quantitative image reanalysis

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Lee, J. H.; Lee, S.; Zhang, Y.; Seo, D. J.

    2017-12-01

    Hurricane Harvey was one of the most extreme weather events in Texas history and left significant damage in the Houston and adjoining coastal areas. To better understand the relative contributions to urban flooding of the extreme amount and spatial extent of rainfall, the unique geography, land use and storm surge, high-resolution water modeling is necessary so that natural and man-made components are fully resolved. In this presentation, we reconstruct the spatiotemporal evolution of inundation during Hurricane Harvey using hyper-resolution modeling and quantitative image reanalysis. The two-dimensional urban flood model used is based on the dynamic wave approximation and 10 m-resolution terrain data, and is forced by radar-based multisensor quantitative precipitation estimates. The model domain includes Buffalo, Brays, Greens and White Oak Bayous in Houston. The model is simulated using hybrid parallel computing. To evaluate the dynamic inundation mapping, we combine various qualitative crowdsourced images and video footage with LiDAR-based terrain data.

  10. Quantitative imaging test approval and biomarker qualification: interrelated but distinct activities.

    PubMed

    Buckler, Andrew J; Bresolin, Linda; Dunnick, N Reed; Sullivan, Daniel C; Aerts, Hugo J W L; Bendriem, Bernard; Bendtsen, Claus; Boellaard, Ronald; Boone, John M; Cole, Patricia E; Conklin, James J; Dorfman, Gary S; Douglas, Pamela S; Eidsaunet, Willy; Elsinger, Cathy; Frank, Richard A; Gatsonis, Constantine; Giger, Maryellen L; Gupta, Sandeep N; Gustafson, David; Hoekstra, Otto S; Jackson, Edward F; Karam, Lisa; Kelloff, Gary J; Kinahan, Paul E; McLennan, Geoffrey; Miller, Colin G; Mozley, P David; Muller, Keith E; Patt, Rick; Raunig, David; Rosen, Mark; Rupani, Haren; Schwartz, Lawrence H; Siegel, Barry A; Sorensen, A Gregory; Wahl, Richard L; Waterton, John C; Wolf, Walter; Zahlmann, Gudrun; Zimmerman, Brian

    2011-06-01

    Quantitative imaging biomarkers could speed the development of new treatments for unmet medical needs and improve routine clinical care. However, it is not clear how the various regulatory and nonregulatory (eg, reimbursement) processes (often referred to as pathways) relate, nor is it clear which data need to be collected to support these different pathways most efficiently, given the time- and cost-intensive nature of doing so. The purpose of this article is to describe current thinking regarding these pathways emerging from diverse stakeholders interested and active in the definition, validation, and qualification of quantitative imaging biomarkers and to propose processes to facilitate the development and use of quantitative imaging biomarkers. A flexible framework is described that may be adapted for each imaging application, providing mechanisms that can be used to develop, assess, and evaluate relevant biomarkers. From this framework, processes can be mapped that would be applicable to both imaging product development and to quantitative imaging biomarker development aimed at increasing the effectiveness and availability of quantitative imaging. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100800/-/DC1. RSNA, 2011

  11. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    PubMed

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data using both controlled experiments with known quantitative differences for specific proteins used as standards as well as "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and are straightforward in their application.

  12. A Quantitative Model for the Prediction of Sooting Tendency from Molecular Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St. John, Peter C.; Kairys, Paul; Das, Dhrubajyoti D.

    Particulate matter emissions negatively affect public health and global climate, yet newer fuel-efficient gasoline direct injection engines tend to produce more soot than their port-fuel injection counterparts. Fortunately, the search for sustainable biomass-based fuel blendstocks provides an opportunity to develop fuels that suppress soot formation in more efficient engine designs. However, as emissions tests are experimentally cumbersome and the search space for potential bioblendstocks is vast, new techniques are needed to estimate the sooting tendency of a diverse range of compounds. In this study, we develop a quantitative structure-activity relationship (QSAR) model of sooting tendency based on the experimental yield sooting index (YSI), which ranks molecules on a scale from n-hexane, 0, to benzene, 100. The model includes a rigorously defined applicability domain, and the predictive performance is checked using both internal and external validation. Model predictions for compounds in the external test set had a median absolute error of ~3 YSI units. An investigation of compounds that are poorly predicted by the model lends new insight into the complex mechanisms governing soot formation. Predictive models of soot formation can therefore be expected to play an increasingly important role in the screening and development of next-generation biofuels.

  13. A Quantitative Model for the Prediction of Sooting Tendency from Molecular Structure

    DOE PAGES

    St. John, Peter C.; Kairys, Paul; Das, Dhrubajyoti D.; ...

    2017-07-24

    Particulate matter emissions negatively affect public health and global climate, yet newer fuel-efficient gasoline direct injection engines tend to produce more soot than their port-fuel injection counterparts. Fortunately, the search for sustainable biomass-based fuel blendstocks provides an opportunity to develop fuels that suppress soot formation in more efficient engine designs. However, as emissions tests are experimentally cumbersome and the search space for potential bioblendstocks is vast, new techniques are needed to estimate the sooting tendency of a diverse range of compounds. In this study, we develop a quantitative structure-activity relationship (QSAR) model of sooting tendency based on the experimental yield sooting index (YSI), which ranks molecules on a scale from n-hexane, 0, to benzene, 100. The model includes a rigorously defined applicability domain, and the predictive performance is checked using both internal and external validation. Model predictions for compounds in the external test set had a median absolute error of ~3 YSI units. An investigation of compounds that are poorly predicted by the model lends new insight into the complex mechanisms governing soot formation. Predictive models of soot formation can therefore be expected to play an increasingly important role in the screening and development of next-generation biofuels.
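
    The quoted anchors imply a simple linear rescaling of whatever raw sooting measurement is used (a hedged reading of the scale definition; the anchor values below are placeholders):

        def to_ysi(raw, raw_hexane, raw_benzene):
            # n-hexane maps to 0, benzene to 100, everything else linearly between.
            return 100.0 * (raw - raw_hexane) / (raw_benzene - raw_hexane)

        print(to_ysi(raw=0.42, raw_hexane=0.10, raw_benzene=0.90))  # -> 40.0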

  14. Quantitative analysis of breast cancer diagnosis using a probabilistic modelling approach.

    PubMed

    Liu, Shuo; Zeng, Jinshu; Gong, Huizhou; Yang, Hongqin; Zhai, Jia; Cao, Yi; Liu, Junxiu; Luo, Yuling; Li, Yuhua; Maguire, Liam; Ding, Xuemei

    2018-01-01

    Breast cancer is the most prevalent cancer in women in most countries of the world. Many computer-aided diagnostic methods have been proposed, but there are few studies on quantitative discovery of probabilistic dependencies among breast cancer data features and identification of the contribution of each feature to breast cancer diagnosis. This study aims to fill this void by utilizing a Bayesian network (BN) modelling approach. A K2 learning algorithm and statistical computation methods are used to construct BN structure and assess the obtained BN model. The data used in this study were collected from a clinical ultrasound dataset derived from a Chinese local hospital and a fine-needle aspiration cytology (FNAC) dataset from UCI machine learning repository. Our study suggested that, in terms of ultrasound data, cell shape is the most significant feature for breast cancer diagnosis, and the resistance index presents a strong probabilistic dependency on blood signals. With respect to FNAC data, bare nuclei are the most important discriminating feature of malignant and benign breast tumours, and uniformity of both cell size and cell shape are tightly interdependent. The BN modelling approach can support clinicians in making diagnostic decisions based on the significant features identified by the model, especially when some other features are missing for specific patients. The approach is also applicable to other healthcare data analytics and data modelling for disease diagnosis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies

    PubMed Central

    Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A

    2013-01-01

    A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies traits associated with reproductive investment and growth evolved more slowly than traits related to maturation. In empirical observations, age-at-maturation changed faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026

  16. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data

    PubMed Central

    Gritsenko, Alexey A.; Hulsman, Marc; Reinders, Marcel J. T.; de Ridder, Dick

    2015-01-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates. PMID:26275099

  17. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data.

    PubMed

    Gritsenko, Alexey A; Hulsman, Marc; Reinders, Marcel J T; de Ridder, Dick

    2015-08-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates.
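
    Entries 16 and 17 describe fitting TASEP models of translation to ribosome profiling data. As a minimal illustration of the underlying model class (not the paper's fitted implementation), the sketch below simulates a short TASEP lattice with codon-specific hop rates; all rates and the lattice length are invented.

```python
# Minimal TASEP sketch: ribosomes initiate at rate alpha, hop right at
# codon-specific rates, and cannot overlap.
import random

def simulate_tasep(rates, alpha, steps, seed=0):
    """Discrete-time Monte Carlo on a 1-D lattice; returns mean occupancy."""
    random.seed(seed)
    L = len(rates)
    lattice = [0] * L            # 1 = ribosome present at this codon
    occupancy = [0.0] * L
    dt = 0.1                     # small time step so rate*dt << 1
    for _ in range(steps):
        # initiation at the first codon if it is free
        if lattice[0] == 0 and random.random() < alpha * dt:
            lattice[0] = 1
        # sweep sites in random order to avoid directional update bias
        for i in random.sample(range(L), L):
            if lattice[i] and random.random() < rates[i] * dt:
                if i == L - 1:             # termination at the last codon
                    lattice[i] = 0
                elif lattice[i + 1] == 0:  # hop right only if next site free
                    lattice[i], lattice[i + 1] = 0, 1
        for i in range(L):
            occupancy[i] += lattice[i]
    return [o / steps for o in occupancy]

profile = simulate_tasep(rates=[1.0, 0.3, 1.0, 1.0, 0.5], alpha=0.4,
                         steps=20000)
print(profile)  # slow codons show elevated occupancy (ribosomes dwell longer)
```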

  18. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    PubMed Central

    Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.

    2014-01-01

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved
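
    As a hedged sketch of the landmark-based component of such registration (synthetic points, not the study's data), the code below computes a least-squares rigid transform via the Kabsch/SVD method and reports a target registration error (TRE) for a hypothetical ablation target.

```python
# Landmark-based rigid registration and TRE on synthetic 3-D points.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(1)
landmarks = rng.uniform(0, 50, size=(4, 3))    # fiducial landmarks (mm-like)
target = np.array([[25.0, 25.0, 25.0]])        # hypothetical ablation target
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([2.0, -1.0, 3.0])
noisy = landmarks @ R_true.T + t_true + rng.normal(0, 0.5, landmarks.shape)

R, t = rigid_register(landmarks, noisy)
tre = np.linalg.norm((target @ R.T + t) - (target @ R_true.T + t_true))
print(f"TRE: {tre:.2f} (same units as the landmark coordinates)")
```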

  19. Quantitative tomographic imaging of intermolecular FRET in small animals

    PubMed Central

    Venugopal, Vivek; Chen, Jin; Barroso, Margarida; Intes, Xavier

    2012-01-01

    Förster resonance energy transfer (FRET) is a nonradiative transfer of energy between two fluorescent molecules (a donor and an acceptor) in nanometer range proximity. FRET imaging methods have been applied to proteomic studies and drug discovery applications based on intermolecular FRET efficiency measurements and stoichiometric measurements of FRET interaction as quantitative parameters of interest. Importantly, FRET provides information about biomolecular interactions at a molecular level, well beyond the diffraction limits of standard microscopy techniques. The application of FRET to small animal imaging will allow biomedical researchers to investigate physiological processes occurring at nanometer range in vivo as well as in situ. In this work a new method for the quantitative reconstruction of FRET measurements in small animals, incorporating a full-field tomographic acquisition system with a Monte Carlo based hierarchical reconstruction scheme, is described and validated in murine models. Our main objective is to estimate the relative concentration of two forms of donor species, i.e., a donor molecule involved in FRETing to an acceptor close by and a non-FRETing donor molecule. PMID:23243567

  20. Quantitative Literacy: Geosciences and Beyond

    NASA Astrophysics Data System (ADS)

    Richardson, R. M.; McCallum, W. G.

    2002-12-01

    Quantitative literacy seems like such a natural for the geosciences, right? The field has gone from its origin as a largely descriptive discipline to one where it is hard to imagine failing to bring a full range of mathematical tools to the solution of geological problems. Although there are many definitions of quantitative literacy, we have proposed one that is analogous to the UNESCO definition of conventional literacy: "A quantitatively literate person is one who, with understanding, can both read and represent quantitative information arising in his or her everyday life." Central to this definition is the concept that a curriculum for quantitative literacy must go beyond the basic ability to "read and write" mathematics and develop conceptual understanding. It is also critical that a curriculum for quantitative literacy be engaged with a context, be it everyday life, humanities, geoscience or other sciences, business, engineering, or technology. Thus, our definition works both within and outside the sciences. What role do geoscience faculty have in helping students become quantitatively literate? Is it our role, or that of the mathematicians? How does quantitative literacy vary between different scientific and engineering fields? Or between science and nonscience fields? We will argue that successful quantitative literacy curricula must be an across-the-curriculum responsibility. We will share examples of how quantitative literacy can be developed within a geoscience curriculum, beginning with introductory classes for nonmajors (using the Mauna Loa CO2 data set) through graduate courses in inverse theory (using singular value decomposition). We will highlight six approaches to across-the-curriculum efforts from national models: collaboration between mathematics and other faculty; gateway testing; intensive instructional support; workshops for nonmathematics faculty; quantitative reasoning requirement; and individual initiative by nonmathematics faculty.

  1. Transcriptome discovery in non-model wild fish species for the development of quantitative transcript abundance assays

    USGS Publications Warehouse

    Hahn, Cassidy M.; Iwanowicz, Luke R.; Cornman, Robert S.; Mazik, Patricia M.; Blazer, Vicki S.

    2016-01-01

    Environmental studies increasingly identify the presence of both contaminants of emerging concern (CECs) and legacy contaminants in aquatic environments; however, the biological effects of these compounds on resident fishes remain largely unknown. High throughput methodologies were employed to establish partial transcriptomes for three wild-caught, non-model fish species: smallmouth bass (Micropterus dolomieu), white sucker (Catostomus commersonii) and brown bullhead (Ameiurus nebulosus). Sequences from these transcriptome databases were utilized in the development of a custom nCounter CodeSet that allowed for direct multiplexed measurement of 50 transcript abundance endpoints in liver tissue. Sequence information was also utilized in the development of quantitative real-time PCR (qPCR) primers. Cross-species hybridization allowed the smallmouth bass nCounter CodeSet to be used for quantitative transcript abundance analysis of an additional non-model species, largemouth bass (Micropterus salmoides). We validated the nCounter analysis data system with qPCR for a subset of genes and confirmed concordant results. Changes in transcript abundance biomarkers between sexes and seasons were evaluated to provide baseline data on transcript modulation for each species of interest.

  2. Conducting quantitative synthesis when comparing medical interventions: AHRQ and the Effective Health Care Program.

    PubMed

    Fu, Rongwei; Gartlehner, Gerald; Grant, Mark; Shamliyan, Tatyana; Sedrakyan, Art; Wilt, Timothy J; Griffith, Lauren; Oremus, Mark; Raina, Parminder; Ismaila, Afisi; Santaguida, Pasqualina; Lau, Joseph; Trikalinos, Thomas A

    2011-11-01

    This article establishes recommendations for conducting quantitative synthesis, or meta-analysis, using study-level data in comparative effectiveness reviews (CERs) for the Evidence-based Practice Center (EPC) program of the Agency for Healthcare Research and Quality. We focused on recurrent issues in the EPC program, and the recommendations were developed using group discussion and consensus based on current knowledge in the literature. We first discussed considerations for deciding whether to combine studies, followed by discussions on indirect comparison and incorporation of indirect evidence. Then, we described our recommendations on choosing effect measures and statistical models, giving special attention to combining studies with rare events; and on testing and exploring heterogeneity. Finally, we briefly presented recommendations on combining studies of mixed design and on sensitivity analysis. Quantitative synthesis should be conducted in a transparent and consistent way. Inclusion of multiple alternative interventions in CERs increases the complexity of quantitative synthesis, but the basic issues remain crucial considerations in quantitative synthesis for a CER. We will cover more issues in future versions and update and improve recommendations as new research accumulates, to advance the goal of transparency and consistency. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Novel Uses of In Vitro Data to Develop Quantitative Biological Activity Relationship Models for in Vivo Carcinogenicity Prediction.

    PubMed

    Pradeep, Prachi; Povinelli, Richard J; Merrill, Stephen J; Bozdag, Serdar; Sem, Daniel S

    2015-04-01

    The availability of large in vitro datasets enables better insight into the mode of action of chemicals and better identification of potential mechanism(s) of toxicity. Several studies have shown that not all in vitro assays can contribute as equal predictors of in vivo carcinogenicity for development of hybrid Quantitative Structure Activity Relationship (QSAR) models. We propose two novel approaches for the use of mechanistically relevant in vitro assay data in the identification of relevant biological descriptors and development of Quantitative Biological Activity Relationship (QBAR) models for carcinogenicity prediction. We demonstrate that in vitro assay data can be used to develop QBAR models for in vivo carcinogenicity prediction via two case studies corroborated with firm scientific rationale. The case studies demonstrate the similarities between QBAR and QSAR modeling in: (i) the selection of relevant descriptors to be used in the machine learning algorithm, and (ii) the development of a computational model that maps chemical or biological descriptors to a toxic endpoint. The results of both case studies show: (i) improved accuracy and sensitivity, which are especially desirable under regulatory requirements, and (ii) overall adherence with the OECD/REACH guidelines. Such mechanism-based models can be used along with QSAR models for prediction of mechanistically complex toxic endpoints. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Quantitative Characterization of Spurious Gibbs Waves in 45 CMIP5 Models

    NASA Astrophysics Data System (ADS)

    Geil, K. L.; Zeng, X.

    2014-12-01

    Gibbs oscillations appear in global climate models when representing fields, such as orography, that contain discontinuities or sharp gradients. It has been known for decades that the oscillations are associated with the transformation of the truncated spectral representation of a field to physical space and that the oscillations can also be present in global models that do not use spectral methods. The spurious oscillations are potentially detrimental to model simulations (e.g., over ocean) and this work provides a quantitative characterization of the Gibbs oscillations that appear across the Coupled Model Intercomparison Project Phase 5 (CMIP5) models. An ocean transect running through the South Pacific High toward the Andes is used to characterize the oscillations in ten different variables. These oscillations are found to be stationary and hence are not caused by (physical) waves in the atmosphere. We quantify the oscillation amplitude using the root mean square difference (RMSD) between the transect of a variable and its running mean (rather than the constant mean across the transect). We also compute the RMSD to interannual variability (IAV) ratio, which provides a relative measure of the oscillation amplitude. Of the variables examined, the largest RMSD values exist in the surface pressure field of spectral models, while the smallest RMSD values within the surface pressure field come from models that use finite difference (FD) techniques. Many spectral models have a surface pressure RMSD that is 2 to 15 times greater than IAV over the transect and an RMSD:IAV ratio greater than one for many other variables including surface temperature, incoming shortwave radiation at the surface, incoming longwave radiation at the surface, and total cloud fraction. In general, the FD models outperform the spectral models, but not all the spectral models have large amplitude oscillations and there are a few FD models where the oscillations do appear. Finally, we present a
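
    A minimal sketch of the two diagnostics described above, applied to synthetic data: the RMSD of a transect about its running mean, and the RMSD:IAV ratio. The ripple wavelength, amplitude, and IAV value are invented for illustration.

```python
# RMSD about a running mean, and RMSD:IAV, on a synthetic pressure transect.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(100)                                   # grid points on transect
transect = 1013.0 + 0.8 * np.sin(2 * np.pi * x / 7)  # Gibbs-like ripple, hPa
transect += rng.normal(0, 0.05, x.size)              # small measurement noise

window = 15
running_mean = np.convolve(transect, np.ones(window) / window, mode="same")

interior = slice(window, -window)                    # avoid convolution edges
rmsd = np.sqrt(np.mean((transect - running_mean)[interior] ** 2))

iav = 0.3   # interannual variability (std. dev. of annual means), invented
print(f"RMSD = {rmsd:.2f} hPa, RMSD:IAV = {rmsd / iav:.1f}")
```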

  5. Spiraling between qualitative and quantitative data on women's health behaviors: a double helix model for mixed methods.

    PubMed

    Mendlinger, Sheryl; Cwikel, Julie

    2008-02-01

    A double helix spiral model is presented which demonstrates how to combine qualitative and quantitative methods of inquiry in an interactive fashion over time. Using findings on women's health behaviors (e.g., menstruation, breast-feeding, coping strategies), we show how qualitative and quantitative methods highlight the theory of knowledge acquisition in women's health decisions. A rich data set of 48 semistructured, in-depth ethnographic interviews with mother-daughter dyads from six ethnic groups (Israeli, European, North African, Former Soviet Union [FSU], American/Canadian, and Ethiopian), plus seven focus groups, provided the qualitative sources for analysis. This data set formed the basis of research questions used in a quantitative telephone survey of 302 Israeli women from the ages of 25 to 42 from four ethnic groups. We employed multiple cycles of data analysis from both data sets to produce a more detailed and multidimensional picture of women's health behavior decisions through a spiraling process.

  6. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    EPA Science Inventory

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  7. A plant-wide aqueous phase chemistry module describing pH variations and ion speciation/pairing in wastewater treatment process models.

    PubMed

    Flores-Alsina, Xavier; Kazadi Mbamba, Christian; Solon, Kimberly; Vrecko, Darko; Tait, Stephan; Batstone, Damien J; Jeppsson, Ulf; Gernaey, Krist V

    2015-11-15

    There is a growing interest within the Wastewater Treatment Plant (WWTP) modelling community to correctly describe physico-chemical processes after many years of mainly focusing on biokinetics. Indeed, future modelling needs, such as a plant-wide phosphorus (P) description, require a major, but unavoidable, additional degree of complexity when representing cationic/anionic behaviour in Activated Sludge (AS)/Anaerobic Digestion (AD) systems. In this paper, a plant-wide aqueous phase chemistry module describing pH variations plus ion speciation/pairing is presented and interfaced with industry standard models. The module accounts for extensive consideration of non-ideality, including ion activities instead of molar concentrations and complex ion pairing. The general equilibria are formulated as a set of Differential Algebraic Equations (DAEs) instead of Ordinary Differential Equations (ODEs) in order to reduce the overall stiffness of the system, thereby enhancing simulation speed. Additionally, a multi-dimensional version of the Newton-Raphson algorithm is applied to handle the existing multiple algebraic inter-dependencies. The latter is reinforced with the Simulated Annealing method to increase the robustness of the solver, making the system less dependent on the initial conditions. Simulation results show pH predictions when describing Biological Nutrient Removal (BNR) by the activated sludge models (ASM) 1, 2d and 3, comparing the performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) treatment plant configuration under different anaerobic/anoxic/aerobic conditions. The same framework is implemented in the Benchmark Simulation Model No. 2 (BSM2) version of the Anaerobic Digestion Model No. 1 (ADM1) (WWTP3) as well, predicting pH values at different cationic/anionic loads. In this way, the general applicability/flexibility of the proposed approach is demonstrated, by implementing the aqueous phase chemistry module in some
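
    As a one-dimensional illustration of the Newton-Raphson idea that the module applies to its multi-dimensional algebraic system (this is not the module's code), the sketch below solves the charge balance of a weak monoprotic acid for [H+]; the constants are textbook-style values and activity corrections are ignored.

```python
# Newton-Raphson pH solution for a weak monoprotic acid plus water.
import math

Ka, Kw, C = 1.75e-5, 1.0e-14, 0.01   # acetic-acid-like example, mol/L

def f(h):   # charge balance: [H+] - [A-] - [OH-] = 0
    return h - Ka * C / (Ka + h) - Kw / h

def df(h):  # analytic derivative used by Newton-Raphson
    return 1 + Ka * C / (Ka + h) ** 2 + Kw / h ** 2

h = 1e-4                              # initial guess for [H+]
for _ in range(50):
    step = f(h) / df(h)
    h -= step
    if abs(step) < 1e-12 * h:         # relative convergence test
        break

print(f"pH = {-math.log10(h):.2f}")   # ~3.39 for these constants
```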

  8. Quantitative structure-activity relationship modeling of rat acute toxicity by oral exposure.

    PubMed

    Zhu, Hao; Martin, Todd M; Ye, Lin; Sedykh, Alexander; Young, Douglas M; Tropsha, Alexander

    2009-12-01

    Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by the determination coefficient R² of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to a decrease in chemical space coverage; depending on the applicability domain threshold, R² ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all five models. The consensus models afforded higher prediction accuracy for the external validation data set with higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
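
    A minimal sketch of the consensus step (invented numbers, not the study's predictions): average the LD50 predictions of several models per compound, counting only models whose applicability domain covers that compound.

```python
# Consensus QSAR averaging with an applicability-domain mask.
import numpy as np

# rows = 5 models, columns = 4 external compounds; log10(LD50) predictions
preds = np.array([[2.1, 3.0, 1.2, 2.7],
                  [2.3, 2.8, 1.0, 2.9],
                  [1.9, 3.2, 1.1, np.nan],   # nan = outside this model's AD
                  [2.2, 2.9, np.nan, 2.8],
                  [2.0, 3.1, 1.3, 2.6]])

consensus = np.nanmean(preds, axis=0)         # average over in-domain models
coverage = np.mean(~np.isnan(preds), axis=0)  # fraction of models contributing
print(consensus, coverage)
```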

  9. Transforming Boolean models to continuous models: methodology and application to T-cell receptor signaling

    PubMed Central

    Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J

    2009-01-01

    Background The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments. Results In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows transformation of logic operations into a system of ordinary differential equations (ODE). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time-courses for multiple ligand concentrations and binding affinities of different ligands. This shows that from the continuous model we may obtain biological insights not evident from the discrete one. Conclusion The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
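
    As a toy illustration of the transformation (a two-gene circuit, not the T-cell model itself), the sketch below interpolates Boolean OR/AND/NOT multilinearly and integrates the resulting ODEs; the time scale, rules, and initial state are arbitrary.

```python
# Boolean-to-continuous sketch: multilinear interpolation of logic gates,
# then relaxation ODEs integrated by forward Euler.
import numpy as np

def or_(x, y):  return x + y - x * y     # interpolates Boolean OR
def and_(x, y): return x * y             # interpolates Boolean AND
def not_(x):    return 1.0 - x           # interpolates Boolean NOT

def derivatives(state, tau=1.0):
    a, b = state
    # Hypothetical Boolean rules: A* = A OR B ; B* = B AND (NOT A)
    return np.array([(or_(a, b) - a) / tau,
                     (and_(b, not_(a)) - b) / tau])

state, dt = np.array([0.1, 0.9]), 0.01
for _ in range(2000):                    # integrate to t = 20*tau
    state = state + dt * derivatives(state)
print(state)                             # A rises toward ON, B decays toward OFF
```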

  10. Quantitative structure-toxicity relationship (QSTR) studies on the organophosphate insecticides.

    PubMed

    Can, Alper

    2014-11-04

    Organophosphate insecticides are the most commonly used pesticides in the world. In this study, quantitative structure-toxicity relationship (QSTR) models were derived for estimating the acute oral toxicity of organophosphate insecticides to male rats. The 20 chemicals of the training set and the seven compounds of the external testing set were described by means of using descriptors. Descriptors for lipophilicity, polarity and molecular geometry, as well as quantum chemical descriptors for energy were calculated. Model development to predict toxicity of organophosphate insecticides in different matrices was carried out using multiple linear regression. The model was validated internally and externally. In the present study, QSTR model was used for the first time to understand the inherent relationships between the organophosphate insecticide molecules and their toxicity behavior. Such studies provide mechanistic insight about structure-toxicity relationship and help in the design of less toxic insecticides. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. A simple polymeric model describes cell nuclear mechanical response

    NASA Astrophysics Data System (ADS)

    Banigan, Edward; Stephens, Andrew; Marko, John

    The cell nucleus must continually resist inter- and intracellular mechanical forces, and proper mechanical response is essential to basic cell biological functions as diverse as migration, differentiation, and gene regulation. Experiments probing nuclear mechanics reveal that the nucleus stiffens under strain, leading to two characteristic regimes of force response. This behavior depends sensitively on the intermediate filament protein lamin A, which comprises the outer layer of the nucleus, and the properties of the chromatin interior. To understand these mechanics, we study a simulation model of a polymeric shell encapsulating a semiflexible polymer. This minimalistic model qualitatively captures the typical experimental nuclear force-extension relation and observed nuclear morphologies. Using a Flory-like theory, we explain the simulation results and mathematically estimate the force-extension relation. The model and experiments suggest that chromatin organization is a dominant contributor to nuclear mechanics, while the lamina protects cell nuclei from large deformations.

  12. Numerical modeling of flow focusing: Quantitative characterization of the flow regimes

    NASA Astrophysics Data System (ADS)

    Mamet, V.; Namy, P.; Dedulle, J.-M.

    2017-09-01

    Among droplet generation technologies, the flow focusing technique is a major process due to its control, stability, and reproducibility. In this process, one fluid (the continuous phase) interacts with another one (the dispersed phase) to create small droplets. Experimental assays in the literature on gas-liquid flow focusing have shown that different jet regimes can be obtained depending on the operating conditions. However, the underlying physical phenomena remain unclear, especially mechanical interactions between the fluids and the oscillation phenomenon of the liquid. In this paper, based on published studies, a numerical diphasic model has been developed to take into consideration the mechanical interaction between phases, using the Cahn-Hilliard method to monitor the interface. Depending on the liquid/gas inputs and the geometrical parameters, various regimes can be obtained, from a steady state regime to an unsteady one with liquid oscillation. In the dispersed phase, the model enables us to compute the evolution of fluid flow, both in space (size of the recirculation zone) and in time (period of oscillation). The transition between unsteady and stationary regimes is assessed in relation to liquid and gas dimensionless numbers, showing the existence of critical thresholds. This model successfully highlights, qualitatively and quantitatively, the influence of the geometry of the nozzle, in particular, its inner diameter.

  13. Quantitative image quality evaluation of MR images using perceptual difference models

    PubMed Central

    Miao, Jun; Huo, Donglai; Wilson, David L.

    2008-01-01

    The authors are using a perceptual difference model (Case-PDM) to quantitatively evaluate the image quality of the thousands of test images which can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and from multiple image reconstruction algorithms to Case-PDM and similar models. The authors found that Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced choice study varied with the type of image under study, but was ≈1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, we found overall that Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al. IEEE Trans. Image Process. 13, 600–612 (2004)] > mean squared error ≈ NR [Wang et al. (2004) (unpublished)] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation but that one should probably restrict studies to similar images and similar processing, normally not a limitation in image reconstruction studies. PMID:18649487

  14. q-deformed Einstein's model to describe specific heat of solid

    NASA Astrophysics Data System (ADS)

    Guha, Atanu; Das, Prasanta Kumar

    2018-04-01

    Realistic phenomena can be described more appropriately using a generalized canonical ensemble, with proper parameter sets involved. We have generalized Einstein's theory for the specific heat of solids in Tsallis statistics, where temperature fluctuation is introduced into the theory via the fluctuation parameter q. At low temperature the Einstein curve of the specific heat in the nonextensive Tsallis scenario lies exactly on the experimental data points. Consequently this q-modified Einstein curve is found to overlap with the one predicted by Debye. Considering only the temperature fluctuation effect (even without more than one mode of vibration being triggered), we found that the CV vs T curve is as good as the one obtained by considering the different modes of vibration as suggested by Debye. Generalizing Einstein's theory in Tsallis statistics, we found that a unique value of the Einstein temperature θE, along with a temperature-dependent deformation parameter q(T), can well describe the phenomenon of the specific heat of solids, i.e. the theory is equivalent to Debye's theory with a temperature-dependent θD.
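
    For reference, the textbook Einstein heat capacity and the standard Tsallis q-exponential read as below; the precise way the paper inserts q(T) is paraphrased from the abstract, not quoted from the paper.

```latex
% Einstein heat capacity (textbook form) and the Tsallis q-exponential.
\[
  C_V = 3R\left(\frac{\theta_E}{T}\right)^{2}
        \frac{e^{\theta_E/T}}{\left(e^{\theta_E/T}-1\right)^{2}},
  \qquad
  e_q(x) \equiv \left[1+(1-q)\,x\right]^{1/(1-q)}.
\]
% Replacing the exponential by e_q introduces the fluctuation parameter q;
% the ordinary Einstein curve is recovered in the limit q -> 1, where
% e_q(x) -> e^x.
```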

  15. Describing Technological Paradigm Transitions: A Methodological Exploration.

    ERIC Educational Resources Information Center

    Wallace, Danny P.; Van Fleet, Connie

    1997-01-01

    Presents a humorous treatment of the "sessio taurino" (or humanistic inquiry) technique for describing changes in technological models. The fundamental tool of "sessio taurino" is a loosely-structured event known as the session, which is of indeterminate length, involves a flexible number of participants, and utilizes a…

  16. An exercise to teach quantitative analysis and modeling using Excel-based analysis of the carbon cycle in the anthropocene

    NASA Astrophysics Data System (ADS)

    Stoll, Heather

    2013-04-01

    A computer modeling exercise was created to allow students to investigate the consequences of fossil fuel burning and land use change on the amount of carbon dioxide in the atmosphere. Students work with a simple numerical model of the carbon cycle rendered in Excel, conduct a set of sensitivity tests with different amounts and rates of C addition, and then graph and discuss their results. In the recommended approach, the model is provided to students without the biosphere, and in class the formulas to integrate this module are typed into Excel simultaneously by instructor and students, helping students understand how the larger model is set up. In terms of content, students learn to recognize the redistribution of fossil fuel carbon between the ocean and atmosphere, and to distinguish the consequences of rapid vs slow rates of addition of fossil fuel CO2 and the reasons for this difference. Students become familiar with the use of formulas in Excel, working with a large (300 rows, 20 columns) worksheet, and gain competence in graphical representation of multiple scenarios. Students learn to appreciate the power and limitations of numerical models of complex cycles, the concept of inverse and forward models, and sensitivity tests. Finally, students learn that a hypothesis may be "reasonable" but still not quantitatively sufficient - in this case, that the "Industrial Revolution" was not the source of increasing atmospheric CO2 from 1750-1900. The described activity is available to educators on the Teach the Earth portal of the Science Education Research Center (SERC) http://serc.carleton.edu/quantskills/activities/68751.html.
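
    A minimal sketch of the kind of box model the exercise builds in Excel, written here in Python: two reservoirs exchanging carbon under a crude step emission scenario. The reservoir sizes and rate constants are illustrative only.

```python
# Two-box (atmosphere-ocean) carbon cycle with 1-year Euler steps.
import numpy as np

years = np.arange(1750, 2051)
atm, ocean = 600.0, 38000.0               # initial reservoirs, GtC (rough)
k_ao = 0.20                               # atmosphere -> ocean rate, 1/yr
k_oa = k_ao * 600 / 38000                 # chosen so fluxes balance at start

atm_history = []
for yr in years:
    emissions = 0.0 if yr < 1900 else 8.0  # GtC/yr, crude step scenario
    flux = k_ao * atm - k_oa * ocean       # net atmosphere -> ocean transfer
    atm += emissions - flux                # 1-year forward Euler step
    ocean += flux
    atm_history.append(atm)

print(f"Atmospheric carbon in 2050: {atm_history[-1]:.0f} GtC")
```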

  17. Electric Field Quantitative Measurement System and Method

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2016-01-01

    A method and system are provided for making a quantitative measurement of an electric field. A plurality of antennas separated from one another by known distances are arrayed in a region that extends in at least one dimension. A voltage difference between at least one selected pair of antennas is measured. Each voltage difference is divided by the known distance associated with the selected pair of antennas corresponding thereto to generate a resulting quantity. The plurality of resulting quantities defined over the region quantitatively describe an electric field therein.
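
    The central arithmetic of the method reduces to a finite-difference estimate of the field, as in the hedged sketch below (invented numbers; the minus sign follows the physics convention E = -dV/dx, which the abstract does not spell out).

```python
# Field component along a line of antennas: voltage difference over distance.
import numpy as np

positions = np.array([0.00, 0.10, 0.25, 0.45])   # antenna positions, m
voltages = np.array([0.00, 0.52, 1.29, 2.31])    # measured potentials, V

# field estimate between consecutive antennas: E ~ -dV/dx (V/m)
field = -np.diff(voltages) / np.diff(positions)
midpoints = (positions[:-1] + positions[1:]) / 2
for x, e in zip(midpoints, field):
    print(f"E({x:.2f} m) ~ {e:+.1f} V/m")
```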

  18. A simple model of universe describing the early inflation and the late accelerated expansion in a symmetric manner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavanis, Pierre-Henri

    We construct a simple model of universe which 'unifies' vacuum energy and radiation on the one hand, and matter and dark energy on the other hand, in the spirit of a generalized Chaplygin gas model. Specifically, the phases of early inflation and late accelerated expansion are described by a generalized equation of state p/c² = αρ + kρ^(1+1/n), having a linear component p = αρc² and a polytropic component p = kρ^(1+1/n)c². For α = 1/3, n = 1 and k = −4/(3ρ_P), where ρ_P = 5.16×10^99 g/m³ is the Planck density, this equation of state describes the transition between the vacuum energy era and the radiation era. For t ≥ 0, the universe undergoes an inflationary expansion that brings it from the Planck size l_P = 1.62×10^−35 m to a size a_1 = 2.61×10^−6 m on a timescale of about 23.3 Planck times t_P = 5.39×10^−44 s (early inflation). When t > t_1 = 23.3 t_P, the universe decelerates and enters the radiation era. We interpret the transition from the vacuum energy era to the radiation era as a second order phase transition where the Planck constant ℏ plays the role of finite size effects (the standard Big Bang theory is recovered for ℏ = 0). For α = 0, n = −1 and k = −ρ_Λ, where ρ_Λ = 7.02×10^−24 g/m³ is the cosmological density, the equation of state describes the transition from a decelerating universe dominated by pressureless matter (baryonic and dark matter) to an accelerating universe dominated by dark energy (late inflation). This transition takes place at a size a_2 = 0.204 l_Λ, corresponding to a time t_2 = 0.203 t_Λ, where l_Λ = 4.38×10^26 m is the cosmological length and t_Λ = 1.46×10^18 s the cosmological time. The present universe turns out to be just at the transition between these two periods (t_0 ∼ t_2). Our model gives the same results as the standard
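
    Written out cleanly, the generalized equation of state quoted above and its two parameter regimes are:

```latex
\[
  \frac{p}{c^{2}} = \alpha\rho + k\,\rho^{1+1/n},
  \qquad
  \begin{cases}
    \alpha = \tfrac{1}{3},\; n = 1,\; k = -\dfrac{4}{3\rho_P}
      & \text{(vacuum energy era $\to$ radiation era)}\\[1ex]
    \alpha = 0,\; n = -1,\; k = -\rho_\Lambda
      & \text{(matter era $\to$ dark energy era)}
  \end{cases}
\]
```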

  19. Propagating mass accretion rate fluctuations in black hole X-ray binaries: quantitative tests

    NASA Astrophysics Data System (ADS)

    Rapisarda, S.; Ingram, A.; van der Klis, M.

    2017-10-01

    Over the past 20 years, a consistent phenomenology has been established to describe the variability properties of Black Hole X-ray Binaries (BHBs). However, the physics behind the observational data is still poorly understood. The recently proposed model PROPFLUC assumes a truncated disc/hot inner flow geometry, with mass accretion rate fluctuations propagating through a precessing inner flow. These two processes give rise, respectively, to broad-band variability and the QPO. Because of propagation, the emission from different regions of the disc/hot flow geometry is correlated. In our study we applied the model PROPFLUC to different BHBs (including XTE J1550-564 and Cygnus X-1) in different spectral states, fitting jointly the power spectra in two energy bands and the cross-spectrum between these two bands. This represents the first study to utilize quantitative fitting of a physical model simultaneously to observed power and cross-spectra. For the case of XTE J1550-564, which displays a strong QPO, we found quantitative and qualitative discrepancies between model predictions and data, whereas we found a good fit for the Cygnus X-1 data, which does not display a QPO. We conclude that the discrepancies are generic to the propagating fluctuations paradigm, and may be related to the mechanism originating the QPO.
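
    A sketch of the joint statistics being fitted (not PROPFLUC itself): segment-averaged power spectra of two synthetic energy-band light curves and the cross-spectrum between them, from which per-frequency phase lags follow.

```python
# Segment-averaged power and cross-spectra of two correlated light curves.
import numpy as np

rng = np.random.default_rng(2)
n_seg, seg_len, dt = 64, 256, 0.01
shared = rng.normal(size=n_seg * seg_len + 5)
soft = shared[:-5] + 0.5 * rng.normal(size=n_seg * seg_len)
hard = shared[5:] + 0.5 * rng.normal(size=n_seg * seg_len)  # 5-bin offset

freqs = np.fft.rfftfreq(seg_len, dt)
p_soft = np.zeros(freqs.size)
p_hard = np.zeros(freqs.size)
cross = np.zeros(freqs.size, dtype=complex)
for k in range(n_seg):
    s = np.fft.rfft(soft[k * seg_len:(k + 1) * seg_len])
    h = np.fft.rfft(hard[k * seg_len:(k + 1) * seg_len])
    p_soft += np.abs(s) ** 2
    p_hard += np.abs(h) ** 2
    cross += np.conj(s) * h       # segment averaging suppresses noise
p_soft /= n_seg
p_hard /= n_seg
cross /= n_seg

phase_lag = np.angle(cross)       # per-frequency phase between the two bands
print(phase_lag[1:6])
```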

  20. Mechanistic quantitative structure-activity relationship model for the photoinduced toxicity of polycyclic aromatic hydrocarbons. 1: Physical model based on chemical kinetics in a two-compartment system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krylov, S.N.; Huang, X.D.; Zeiler, L.F.

    1997-11-01

    A quantitative structure-activity relationship model for the photoinduced toxicity of 16 polycyclic aromatic hydrocarbons (PAHs) to duckweed (Lemna gibba) in simulated solar radiation (SSR) was developed. Lemna gibba was chosen for this study because toxicity could be considered in two compartments: water column and leaf tissue. Modeling of photoinduced toxicity was described by photochemical reactions between PAHs and a hypothetical group of endogenous biomolecules (G) required for normal growth, with damage to G by PAHs and/or photomodified PAHs in SSR resulting in impaired growth. The reaction scheme includes photomodification of PAHs, uptake of PAHs into leaves, triplet-state formation of intact PAHs, photosensitization reactions that damage G, and reactions between photomodified PAHs and G. The assumptions used were: the PAH photomodification rate is slower than uptake of chemicals into leaves, the PAH concentration in aqueous solution is nearly constant during a toxicity test, the fluence rate of actinic radiation is lower within leaves than in the aqueous phase, and the toxicity of intact PAHs in the dark is negligible. A series of differential equations describing the reaction kinetics of intact and photomodified PAHs with G was derived. The resulting equation for PAH toxicity was a function of treatment period, initial PAH concentration, relative absorbance of SSR by each PAH, quantum yield for formation of triplet-state PAH, and rate of PAH photomodification. Data for growth in the presence of intact and photomodified PAHs were used to empirically solve for a photosensitization constant (PSC) and a photomodification constant (PMC) for each of the 16 PAHs tested. For 9 PAHs the PMC dominates and for 7 PAHs the PSC dominates.