Science.gov

Sample records for existing models quantitatively

  1. Comment on "Can existing models quantitatively describe the mixing behavior of acetone with water" [J. Chem. Phys. 130, 124516 (2009)].

    PubMed

    Kang, Myungshim; Perera, Aurelien; Smith, Paul E

    2009-10-21

    A recent publication claimed that simulations of acetone-water mixtures using the KBFF model for acetone display demixing at acetone mole fractions below 0.28, in disagreement with experiment and with two previously published studies. Here, we point out some inconsistencies in that study which could help to explain these differences. PMID:20568888

  2. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean it is necessarily a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model against power-law and purely empirical fits for PVA-Borax, a viscoelastic liquid, and for gluten.
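
    As a rough illustration of the balance described above, the sketch below scores candidate models with a Laplace-style approximation to the log-evidence: a goodness-of-fit term minus an Occam factor that penalizes each parameter by the width of its prior range. The data, widths, and model comparison are hypothetical placeholders, not values from the paper.

      import numpy as np

      def log_evidence_approx(residuals, sigma, prior_widths, posterior_widths):
          """Laplace-style evidence estimate: fit term minus an Occam factor.

          Each parameter contributes log(posterior_width / prior_width) < 0,
          so vague priors (wide prior_widths) are penalized -- the balance
          the abstract describes.  All inputs here are illustrative.
          """
          log_like = -0.5 * np.sum((residuals / sigma) ** 2)
          occam = np.sum(np.log(np.asarray(posterior_widths) /
                                np.asarray(prior_widths)))
          return log_like + occam

      # Hypothetical comparison: a 2-mode Maxwell fit vs. a looser empirical fit.
      maxwell = log_evidence_approx(np.array([0.8, -0.5, 0.3]), 1.0,
                                    prior_widths=[2.0, 2.0, 5.0, 5.0],   # physically constrained
                                    posterior_widths=[0.2, 0.2, 0.5, 0.5])
      empirical = log_evidence_approx(np.array([0.7, -0.4, 0.2]), 1.0,
                                      prior_widths=[100.0] * 4,          # vague priors
                                      posterior_widths=[0.2, 0.2, 0.5, 0.5])
      print(maxwell > empirical)  # True: tighter priors win despite a slightly worse fit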

  3. Comparative analysis of existing disinfection models.

    PubMed

    Andrianarison, T; Jupsin, H; Ouali, A; Vasel, J-L

    2010-01-01

    For a long time, Marais's model has been the main tool for predicting disinfection in waste stabilization ponds (WSPs), although various authors have developed other disinfection models. Some ten other empirical models have been proposed over the past fifteen years. Unfortunately, their predictions of disinfection in a given pond differ widely. The existing models are too empirical to give reliable predictions: often their explanatory variables were chosen arbitrarily. In this work, we demonstrate that if influent variables have daily variations, the use of their average values in simulations may overestimate the disinfection effect. New methods are thus needed to provide better fits of the models, and better knowledge of the mechanisms involved is needed to improve disinfection models. PMID:20182074
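
    A minimal numerical illustration of the averaging effect the authors describe, assuming a completely mixed pond with first-order die-off; the rate values are invented for the demonstration.

      import numpy as np

      # Completely mixed pond: survival fraction S(k) = 1 / (1 + k * tau),
      # which is convex in the die-off rate k.  By Jensen's inequality,
      # mean(S(k(t))) > S(mean(k)), so simulating with the daily-average
      # rate overestimates disinfection.  Rates below are illustrative only.
      tau = 5.0                                   # hydraulic residence time (days)
      k = np.array([0.1] * 12 + [1.5] * 12)       # hourly die-off rates (1/day)

      survival_true = np.mean(1.0 / (1.0 + k * tau))      # averaging the response
      survival_avg_input = 1.0 / (1.0 + k.mean() * tau)   # averaging the input

      print(f"true mean survival:   {survival_true:.3f}")
      print(f"with averaged inputs: {survival_avg_input:.3f}")
      # survival_true > survival_avg_input: averaged inputs predict too much kill.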

  4. Generating Navigation Models from Existing Building Data

    NASA Astrophysics Data System (ADS)

    Liu, L.; Zlatanova, S.

    2013-11-01

    Research on indoor navigation models mainly focuses on geometric and logical models. The models are enriched with specific semantic information which supports localisation, navigation, and guidance. Geometric models provide information about the structural (physical) distribution of spaces in a building, while logical models indicate relationships (connectivity and adjacency) between the spaces. In many cases geometric models contain virtual subdivisions to identify smaller spaces which are of interest for navigation (e.g., a reception area) or make use of different semantics. The geometric models are used as a basis to automatically derive logical models. However, there is little reported research on how to automatically realize such geometric models from existing building data (such as floor plans) or indoor standards (CityGML LOD4 or IFC). In this paper, we present our experiments on the automatic creation of logical models from floor plans and CityGML LOD4. For the creation we adopt the Indoor Spatial Navigation Model (INSM), which is specifically designed to support indoor navigation. The semantic concepts in INSM differ from commonly used notions of indoor spaces, such as rooms and corridors, but they facilitate the automatic creation of logical models.
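
    A toy sketch of the geometric-to-logical derivation step described above: spaces that share an opening become connected nodes in a graph. The room and door identifiers are invented for illustration and bear no relation to the INSM schema.

      from itertools import combinations

      # Geometric input (simplified): each space lists the door openings on
      # its boundary, as might be extracted from a floor plan or CityGML LOD4.
      spaces = {
          "corridor":  {"d1", "d2", "d3"},
          "room101":   {"d1"},
          "room102":   {"d2"},
          "reception": {"d3"},
      }

      # Logical model: two spaces are connected if they share an opening.
      connectivity = {
          frozenset((a, b))
          for a, b in combinations(spaces, 2)
          if spaces[a] & spaces[b]
      }

      for edge in sorted(tuple(sorted(e)) for e in connectivity):
          print(" <-> ".join(edge))
      # corridor <-> reception, corridor <-> room101, corridor <-> room102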

  5. Modeling Truth Existence in Truth Discovery

    PubMed Central

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-01-01

    When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery is the task of inferring the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, termed no-truth questions, are difficult to distinguish from the questions that have true answers, termed has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate, and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inferring this graphical model requires parameter tuning of the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods. PMID:26705507
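
    The three source-quality measures have natural count-based estimates. The sketch below computes them for a hypothetical source, following the plain-language reading of the abstract; the exact estimators and their role in the graphical model may differ in the paper.

      def source_quality(n_questions, n_answered, n_correct):
          """Count-based estimates of the three fine-grained quality measures.

          silent rate       : fraction of questions the source does not answer
          true spoken rate  : fraction of answered questions it gets right
          false spoken rate : fraction of answered questions it gets wrong
          """
          silent = 1.0 - n_answered / n_questions
          true_spoken = n_correct / n_answered if n_answered else 0.0
          return {
              "silent_rate": silent,
              "true_spoken_rate": true_spoken,
              "false_spoken_rate": 1.0 - true_spoken if n_answered else 0.0,
          }

      # Hypothetical source: answers 80 of 100 questions, 60 of them correctly.
      print(source_quality(100, 80, 60))
      # {'silent_rate': 0.2, 'true_spoken_rate': 0.75, 'false_spoken_rate': 0.25}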

  6. LDEF data: Comparisons with existing models

    NASA Technical Reports Server (NTRS)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-01-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) and the existing models for both the natural micrometeoroid environment and man-made debris was investigated. Experimental data were provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A personal computer (PC) program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as a function of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. Many hydrodynamic impact simulations of various impact events were also conducted using CTH; these identified certain modes of response, including simple metallic target cratering, perforations, and delamination effects of coatings.
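
    A schematic of the flux-to-crater-rate conversion mentioned above, assuming a cumulative power-law particle flux and a power-law crater-to-projectile size scaling; all exponents and coefficients are placeholders, not the scaling laws used in SPENV or CTH.

      def cumulative_flux(d_p, A=1.0e-5, b=2.5):
          """Hypothetical cumulative flux of particles larger than d_p (cm),
          in impacts per m^2 per year: F(>d_p) = A * d_p**-b."""
          return A * d_p ** -b

      def projectile_from_crater(D_c, c=5.0, gamma=1.0):
          """Invert a power-law crater scaling D_c = c * d_p**gamma to get
          the projectile size producing a crater of diameter D_c (cm)."""
          return (D_c / c) ** (1.0 / gamma)

      def crater_production_rate(D_c):
          """Rate of craters larger than D_c equals the flux of projectiles
          larger than the corresponding projectile size."""
          return cumulative_flux(projectile_from_crater(D_c))

      print(crater_production_rate(0.1))  # craters > 1 mm, per m^2 per year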

  7. LDEF data: Comparisons with existing models

    NASA Astrophysics Data System (ADS)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-04-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) and the existing models for both the natural micrometeoroid environment and man-made debris was investigated. Experimental data were provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A personal computer (PC) program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as a function of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. Many hydrodynamic impact simulations of various impact events were also conducted using CTH; these identified certain modes of response, including simple metallic target cratering, perforations, and delamination effects of coatings.

  8. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, Do-Hyuk; Tang, Shurun; Kim, Edward J.

    2015-10-01

    A radiative transfer model (RTM) to calculate snow brightness temperatures (Tb) is a critical element in terrestrial snow parameter retrieval from microwave remote sensing observations. The RTM simulates the Tb of a layered snowpack by solving a set of microwave radiative transfer equations. Even with the same snow physical inputs driving them, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-QMS), and Helsinki University of Technology (HUT) models produce different Tb responses. To invert snow physical properties from the Tb, the differences among the RTMs must first be quantitatively explained. To this end, this initial investigation evaluates the sources of perturbations in these RTMs and identifies the equations in which the three models diverge. Modelling experiments are conducted by supplying the same, gradually varied snow physical inputs, such as snow grain size and snow density, to the three RTMs. Simulations are conducted at the frequencies of the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the three RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against in situ snow samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010 campaigns.
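
    The sensitivity experiment described, feeding identical and gradually varied inputs to each RTM and comparing the resulting Tb, has the shape of a simple parameter sweep. The sketch below uses trivial stand-in emission functions purely to show the harness; they are not MEMLS, DMRT, or HUT, and the coefficients are meaningless placeholders.

      import numpy as np

      # Stand-in "RTMs": toy functions mapping (grain size, density) -> Tb (K).
      # Real models (MEMLS, DMRT, HUT) would be called here instead.
      def toy_rtm_a(grain_mm, rho): return 273.0 - 40.0 * grain_mm - 0.05 * rho
      def toy_rtm_b(grain_mm, rho): return 273.0 - 55.0 * grain_mm - 0.03 * rho
      def toy_rtm_c(grain_mm, rho): return 273.0 - 48.0 * grain_mm - 0.04 * rho

      models = {"A": toy_rtm_a, "B": toy_rtm_b, "C": toy_rtm_c}
      grain_sizes = np.linspace(0.1, 2.0, 5)   # mm
      density = 250.0                          # kg/m^3, held fixed in this sweep

      for g in grain_sizes:
          tb = {name: f(g, density) for name, f in models.items()}
          spread = max(tb.values()) - min(tb.values())
          print(f"grain {g:.2f} mm: " +
                ", ".join(f"{n}={v:.1f} K" for n, v in tb.items()) +
                f"  (spread {spread:.1f} K)")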

  9. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  10. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, D. H.; Tan, S.; Kim, E. J.

    2015-12-01

    A radiative transfer model (RTM) to calculate a snow brightness temperature (Tb) is a critical element in retrieving terrestrial snow properties from microwave remote sensing observations. The RTM simulates the Tb of a layered snowpack by solving a set of microwave radiative transfer formulas. Even with the same snow physical inputs used for the RTM, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-Tsang), and Helsinki University of Technology (HUT) models produce different Tb responses. To invert snow physical properties from the Tb, the differences among the RTMs must first be quantitatively explained. To this end, the paper evaluates the sources of perturbations in the RTMs and identifies the equations in which the three models diverge. Investigations are conducted by providing the same, gradually varied snow physical inputs, such as snow grain size and snow density, to the three RTMs. Simulations are performed at the frequencies of the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the three RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against snow core samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010 campaigns.

  11. The mathematics of cancer: integrating quantitative models.

    PubMed

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology. PMID:26597528

  12. Performance evaluation of ExiStation HBV diagnostic system for hepatitis B virus DNA quantitation.

    PubMed

    Cha, Young Joo; Yoo, Soo Jin; Sohn, Yong-Hak; Kim, Hyun Soo

    2013-11-01

    The performance of a recently developed real-time PCR system, the ExiStation HBV diagnostic system, for quantitation of hepatitis B virus (HBV) in human blood was evaluated. The detection limit, reproducibility, cross-reactivity, and interference were evaluated as measures of analytical performance. For the comparison study, 100 HBV-positive blood samples and 100 HBV-negative samples from Korean Blood Bank Serum were used, and the results of the ExiStation HBV system showed good correlation with those obtained using the Cobas TaqMan (r² = 0.9931) and Abbott real-time PCR systems (r² = 0.9894). The lower limit of detection was measured as 9.55 IU/mL using WHO standards, and the dynamic range was linear from 6.68 to 6.68×10⁹ IU/mL using cloned plasmids. The within-run coefficient of variation (CV) was 9.4%, 2.1%, and 1.1%, and the total CV was 11.8%, 3.6%, and 1.7% at concentrations of 1.92 log₁₀ IU/mL, 3.88 log₁₀ IU/mL, and 6.84 log₁₀ IU/mL, respectively. No cross-reactivity or interference was detected. The ExiStation HBV diagnostic system showed satisfactory analytical sensitivity, excellent reproducibility, no cross-reactivity, no interference, and high agreement with the Cobas TaqMan and Abbott real-time PCR systems, and is therefore a useful tool for the detection and monitoring of HBV infection. PMID:23892129
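
    For reference, within-run and total CV in validation studies of this kind are computed from replicate measurements along the lines below (a simplification of the usual nested-ANOVA design; the replicate values are invented, not the study's data).

      import numpy as np

      # Replicate quantitation results (log10 IU/mL) for one sample level,
      # grouped by run; values are invented for illustration.
      runs = [
          [3.90, 3.86, 3.88],
          [3.84, 3.92, 3.87],
          [3.91, 3.85, 3.89],
      ]

      all_vals = np.concatenate(runs)
      grand_mean = all_vals.mean()

      # Within-run CV: pooled within-run standard deviation over the grand mean.
      within_var = np.mean([np.var(r, ddof=1) for r in runs])
      cv_within = np.sqrt(within_var) / grand_mean * 100

      # Total CV: standard deviation of all results over the grand mean.
      cv_total = all_vals.std(ddof=1) / grand_mean * 100

      print(f"within-run CV: {cv_within:.2f}%   total CV: {cv_total:.2f}%")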

  13. Quantitative modeling of planetary magnetospheric magnetic fields

    NASA Technical Reports Server (NTRS)

    Walker, R. J.

    1979-01-01

    Three new quantitative models of the earth's magnetospheric magnetic field have recently been presented: the Olson-Pfitzer model, the Tsyganenko model, and the Voigt model. The paper reviews these models in some detail with emphasis on the extent to which they have succeeded in improving on earlier models. The models are compared with the observed field in both magnitude and direction. Finally, the application to other planetary magnetospheres of the techniques used to model the earth's magnetospheric magnetic field is briefly discussed.

  14. Integrated Environmental Modeling: Quantitative Microbial Risk Assessment

    EPA Science Inventory

    The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...

  15. 6 Principles for Quantitative Reasoning and Modeling

    ERIC Educational Resources Information Center

    Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye

    2014-01-01

    Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…

  16. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  17. Quantitative risk modeling in aseptic manufacture.

    PubMed

    Tidswell, Edward C; McGarvey, Bernard

    2006-01-01

    Expedient risk assessment of aseptic manufacturing processes offers unique opportunities for improved and sustained assurance of product quality. Contemporary risk assessments applied to aseptic manufacturing processes, however, are commonly handicapped by assumptions and subjectivity, leading to inexactitude. Quantitative risk modeling augmented with Monte Carlo simulations represents a novel, innovative, and more efficient means of risk assessment. This technique relies upon fewer assumptions and removes subjectivity to more swiftly generate an improved, more realistic, quantitative estimate of risk. The fundamental steps and requirements for an assessment of the risk of bioburden ingress into aseptically manufactured products are described. A case study exemplifies how quantitative risk modeling and Monte Carlo simulations achieve a more rapid and improved determination of the risk of bioburden ingress during the aseptic filling of a parenteral product. Although application of quantitative risk modeling is described here purely for the purpose of process improvement, the technique has far wider relevance in the assisted disposition of batches, cleanroom management, and the utilization of real-time data from rapid microbial monitoring technologies. PMID:17089696
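
    A bare-bones sketch of the kind of Monte Carlo risk estimate described: sample the uncertain inputs from distributions instead of fixing them, then estimate the probability of bioburden ingress from the simulated outcomes. Plain NumPy stands in here, and all distributions and rates are illustrative assumptions, not values from the case study.

      import numpy as np

      rng = np.random.default_rng(1)
      n_trials = 100_000

      # Uncertain inputs, sampled rather than assumed fixed (all illustrative):
      exposure_s = rng.lognormal(mean=np.log(30), sigma=0.5, size=n_trials)  # open-container exposure time
      deposition_rate = rng.gamma(shape=2.0, scale=5e-5, size=n_trials)      # organisms per second

      # Expected organisms deposited per fill, then a Poisson draw per trial:
      lam = exposure_s * deposition_rate
      contaminated = rng.poisson(lam) > 0

      print(f"estimated ingress probability: {contaminated.mean():.2e}")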

  18. Quantitative modeling of soil genesis processes

    NASA Technical Reports Server (NTRS)

    Levine, E. R.; Knox, R. G.; Kerber, A. G.

    1992-01-01

    For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.

  19. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does nothing to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
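
    The linkage pattern described, a single unique metadata key shared by the model and the database, can be sketched in a few lines; the field names and the duty-cycle stressing manipulation are hypothetical stand-ins for what a real reliability database would hold.

      # Spreadsheet-style database: one record per Basic Event, keyed by a
      # unique metadata ID that also appears in the risk model.
      database = {
          "BE-0001": {"source": "MIL-HDBK-217F", "base_rate": 2.0e-6,
                      "duty_cycle": 0.25},   # input to a stressing manipulation
          "BE-0002": {"source": "Vendor test report 42", "base_rate": 5.0e-7,
                      "duty_cycle": 1.00},
      }

      def effective_rate(event_id):
          """Apply the documented manipulation (here: duty-cycle stressing)
          so the model always sees a traceable, derived failure rate."""
          rec = database[event_id]
          return rec["base_rate"] * rec["duty_cycle"]

      # Model side: Basic Events carry only the key, never hand-typed numbers.
      model_events = ["BE-0001", "BE-0002"]
      for be in model_events:
          print(be, effective_rate(be), "from", database[be]["source"])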

  20. Quantitative indices of autophagy activity from minimal models

    PubMed Central

    2014-01-01

    Background A number of cellular- and molecular-level studies of autophagy assessment have been carried out with the help of various biochemical and morphological indices. Still, ambiguity remains in the assessment of autophagy status and of the causal relationship between autophagy and related cellular changes. To circumvent such difficulties, we probe new quantitative indices of autophagy which are important for defining autophagy activation and further assessing its roles associated with different physiopathological states. Methods Our approach is based on a minimal autophagy model that allows us to understand the underlying dynamics of autophagy from biological experiments. Specifically, based on the model, we reconstruct the experimental context-specific autophagy profiles from the target autophagy system, and two quantitative indices are defined from the model-driven profiles. The indices are then applied to simulation-based analysis for the specific and quantitative interpretation of the system. Results Two quantitative indices measuring autophagy activities in the induction of sequestration fluxes and in selective degradation are proposed, based on the model-driven autophagy profiles such as the time evolution of autophagy fluxes, levels of autophagosomes/autolysosomes, and corresponding cellular changes. Further, with the help of the indices, the biological experiments of the target autophagy system have been successfully analyzed, implying that the indices are useful not only for defining autophagy activation but also for assessing its role in a specific and quantitative manner. Conclusions Such quantitative autophagy indices, in conjunction with computer-aided analysis, should provide new opportunities to characterize the causal relationship between autophagy activity and the corresponding cellular change, based on a system-level understanding of the autophagic process at good time resolution, complementing current in vivo and in vitro approaches.

  1. The existence of amorphous phase in Portland cements: Physical factors affecting Rietveld quantitative phase analysis

    SciTech Connect

    Snellings, Ruben; Bazzoni, Amélie; Scrivener, Karen

    2014-05-01

    Rietveld quantitative phase analysis has become a widespread tool for the characterization of Portland cement, both for research and production control purposes. One of the major remaining points of debate is whether Portland cements contain amorphous content or not. This paper presents detailed analyses of the amorphous phase contents in a set of commercial Portland cements, clinker, synthetic alite and limestone by Rietveld refinement of X-ray powder diffraction measurements using both external and internal standard methods. A systematic study showed that the sample preparation and comminution procedure is closely linked to the calculated amorphous contents. Particle size reduction by wet-grinding lowered the calculated amorphous contents to insignificant quantities for all materials studied. No amorphous content was identified in the final analysis of the Portland cements under investigation.
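
    For context, the internal-standard route mentioned above conventionally estimates the amorphous content of the original sample from the weighed standard fraction W_s and the Rietveld-refined standard fraction R_s (both in wt%); a common form of the relation in the Rietveld literature is

      $$ A\,(\mathrm{wt\%}) \;=\; \frac{10^{4}}{100 - W_{s}} \left( 1 - \frac{W_{s}}{R_{s}} \right) $$

    so that R_s > W_s (the standard apparently over-represented among the crystalline phases) signals a nonzero amorphous fraction. Whether this exact form matches the paper's implementation is an assumption.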

  2. Facilities Management of Existing School Buildings: Two Models.

    ERIC Educational Resources Information Center

    Building Technology, Inc., Silver Spring, MD.

    While all school districts are responsible for the management of their existing buildings, they often approach the task in different ways. This document presents two models that offer ways a school district administration, regardless of size, may introduce activities into its ongoing management process that will lead to improvements in earthquake…

  3. Training of Existing Workers: Issues, Incentives and Models. Support Document

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This document was produced by the authors based on their research for the report, "Training of Existing Workers: Issues, Incentives and Models," (ED495138) and is an added resource for further information. This support document is divided into the following sections: (1) The Retail Industry--A Snapshot; (2) Case Studies--Hardware, Retail Industry…

  4. Competitive speciation in quantitative genetic models.

    PubMed

    Drossel, B; McKane, A

    2000-06-01

    We study sympatric speciation due to competition in an environment with a broad distribution of resources. We assume that the trait under selection is a quantitative trait, and that mating is assortative with respect to this trait. Our model alternates selection according to Lotka-Volterra-type competition equations, with reproduction using the ideas of quantitative genetics. The recurrence relations defined by these equations are studied numerically and analytically. We find that when a population enters a new environment, with a broad distribution of unexploited food sources, the population distribution broadens under a variety of conditions, with peaks at the edge of the distribution indicating the formation of subpopulations. After a long enough time period, the population can split into several subpopulations with little gene flow between them. PMID:10816369
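
    A minimal sketch of the kind of trait-space recurrence the abstract describes: discrete generations in which per-trait growth follows Lotka-Volterra-type competition, with a Gaussian competition kernel against a Gaussian resource (carrying-capacity) distribution. Assortative mating is omitted for brevity, and all widths and rates are illustrative; whether distinct subpopulations emerge depends on these choices, as the paper analyzes.

      import numpy as np

      # Trait grid and Gaussian resource distribution (carrying capacity).
      x = np.linspace(-3, 3, 121)
      dx = x[1] - x[0]
      K = 200.0 * np.exp(-x**2 / (2 * 1.5**2))          # broad resource supply
      compet_width = 0.5                                 # narrower than resources

      # Gaussian competition kernel: similar traits compete most strongly.
      kernel = np.exp(-(x[:, None] - x[None, :])**2 / (2 * compet_width**2))

      n = np.exp(-x**2 / (2 * 0.2**2))                   # narrow founding population
      for _ in range(300):
          competition = kernel @ n * dx                  # competition felt per trait
          n = n * np.exp(0.5 * (1.0 - competition / K))  # Ricker-type Lotka-Volterra step
          n = np.maximum(n, 0.0)

      peaks = x[(n > np.roll(n, 1)) & (n > np.roll(n, -1)) & (n > n.max() * 0.1)]
      print("population peaks near traits:", np.round(peaks, 2))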

  5. On the existence of monodromies for the Rabi model

    NASA Astrophysics Data System (ADS)

    Carneiro da Cunha, Bruno; Carvalho de Almeida, Manuela; Rabelo de Queiroz, Amílcar

    2016-05-01

    We discuss the existence of monodromies associated with the singular points of the eigenvalue problem for the Rabi model. The complete control of the full monodromy data requires the taming of the Stokes phenomenon associated with the unique irregular singular point. The monodromy data, in particular, the composite monodromy, are written in terms of the parameters of the model via the isomonodromy method and the τ function of the Painlevé V. These data provide a systematic way to obtain the quantized spectrum of the Rabi model.

  6. Quantitative risk modelling for new pharmaceutical compounds.

    PubMed

    Tang, Zhengru; Taylor, Mark J; Lisboa, Paulo; Dyas, Mark

    2005-11-15

    The process of discovering and developing new drugs is long, costly and risk-laden. Faced with a wealth of newly discovered compounds, industrial scientists need to target resources carefully to discern the key attributes of a drug candidate and to make informed decisions. Here, we describe a quantitative approach to modelling the risk associated with drug development as a tool for scenario analysis concerning the probability of success of a compound as a potential pharmaceutical agent. We bring together the three strands of manufacture, clinical effectiveness and financial returns. This approach involves the application of a Bayesian Network. A simulation model is demonstrated with an implementation in MS Excel using the modelling engine Crystal Ball. PMID:16257374
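
    A toy version of the scenario-analysis idea: propagate stage-level success probabilities and uncertain costs and returns through a Monte Carlo simulation. Plain NumPy stands in for the Crystal Ball engine, and the structure and numbers are invented, not the paper's Bayesian network.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 50_000

      # Conditional stage successes, echoing the three strands in the abstract
      # (manufacture, clinical effectiveness, financial return); all invented.
      manufacturable = rng.random(n) < 0.80
      clinical_ok = manufacturable & (rng.random(n) < 0.35)

      # Financial return only realized on full success; costs always incurred.
      cost = rng.lognormal(np.log(80), 0.3, size=n)      # $M development cost
      revenue = np.where(clinical_ok,
                         rng.lognormal(np.log(400), 0.6, size=n), 0.0)
      net = revenue - cost

      print(f"P(success) = {clinical_ok.mean():.2f}")
      print(f"P(net loss) = {(net < 0).mean():.2f}, mean net = ${net.mean():.0f}M")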

  7. Quantitative Models of CAI Rim Layer Growth

    NASA Astrophysics Data System (ADS)

    Ruzicka, A.; Boynton, W. V.

    1995-09-01

    Many hypotheses have been proposed to account for the ~50 micrometer-thick layer sequences (Wark-Lovering rims) that typically surround coarse-grained Ca,Al-rich inclusions (CAIs), but to date no consensus has emerged on how these rims formed. A two-step process-- flash heating of CAIs to produce a refractory residue on the margins of CAIs [1,2,3], followed by reaction and diffusion between CAIs or the refractory residue and an external medium rich in Mg, Si and other ferromagnesian and volatile elements to form the layers [3,4,5]-- may have formed the rims. We have tested the second step of this process quantitatively, and show that many, but not all, of the layering characteristics of CAI rims in the Vigarano, Leoville, and Efremovka CV3 chondrites can be explained by steady-state reaction and diffusion between CAIs and an external medium rich in Mg and Si. Moreover, observed variations in the details of the layering from one CAI to another can be explained primarily by differences in the identity and composition of the external medium, which appears to have included vapor alone, vapor + olivine, and olivine +/- clinopyroxene +/- vapor. An idealized layer sequence for CAI rims in Vigarano, Leoville, and Efremovka can be represented as MSF|S|AM|D|O, where MSF = melilite (M) + spinel (S) + fassaite (F) in the interior of CAIs; S = spinel-rich layer; AM = a layer consisting either of anorthite (A) alone, or M alone, or both A and M; D = a clinopyroxene layer consisting mainly of aluminous diopside (D) that is zoned to fassaite towards the CAI; and O = olivine-rich layer, composed mainly of individually zoned olivine grains that apparently pre-existed layer formation [3]. A or M are absent between the S and D layers in roughly half of the rims. The O layer varies considerably in thickness (0-60 micrometers thick) and in porosity from rim to rim, with olivine grains either tightly intergrown to form a compact layer or arranged loosely on the outer surfaces of the CAIs.

  8. Global quantitative modeling of chromatin factor interactions.

    PubMed

    Zhou, Jian; Troyanskaya, Olga G

    2014-03-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the "chromatin codes") remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles--we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
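
    One standard way to realize maximum entropy modeling with regularization-based structure learning for binary profiles is L1-regularized pseudo-likelihood (neighborhood selection): regress each factor on all the others and read pairwise couplings off the sparse coefficients. The sketch below applies this to synthetic data; it illustrates the technique class, not the authors' implementation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # Synthetic binary "chromatin profiles": 3 factors, factor 1 tracks factor 0.
      n = 2000
      f0 = rng.random(n) < 0.5
      f1 = np.where(rng.random(n) < 0.9, f0, ~f0)        # strongly coupled to f0
      f2 = rng.random(n) < 0.3                           # independent
      X = np.column_stack([f0, f1, f2]).astype(float)

      # Neighborhood selection: sparse logistic regression of each factor
      # on the rest; nonzero coefficients indicate pairwise couplings.
      for j in range(X.shape[1]):
          others = np.delete(np.arange(X.shape[1]), j)
          clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
          clf.fit(X[:, others], X[:, j])
          coupled = [int(k) for k, w in zip(others, clf.coef_[0]) if abs(w) > 0.1]
          print(f"factor {j} couples with factors {coupled}")
      # Expected: 0 <-> 1 coupled; 2 isolated.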

  9. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  10. Physiologically based quantitative modeling of unihemispheric sleep.

    PubMed

    Kedziora, D J; Abeysuriya, R G; Phillips, A J K; Robinson, P A

    2012-12-01

    Unihemispheric sleep has been observed in numerous species, including birds and aquatic mammals. While knowledge of its functional role has been improved in recent years, the physiological mechanisms that generate this behavior remain poorly understood. Here, unihemispheric sleep is simulated using a physiologically based quantitative model of the mammalian ascending arousal system. The model includes mutual inhibition between wake-promoting monoaminergic nuclei (MA) and sleep-promoting ventrolateral preoptic nuclei (VLPO), driven by circadian and homeostatic drives as well as cholinergic and orexinergic input to MA. The model is extended here to incorporate two distinct hemispheres and their interconnections. It is postulated that inhibitory connections between VLPO nuclei in opposite hemispheres are responsible for unihemispheric sleep, and it is shown that contralateral inhibitory connections promote unihemispheric sleep while ipsilateral inhibitory connections promote bihemispheric sleep. The frequency of alternating unihemispheric sleep bouts is chiefly determined by sleep homeostasis and its corresponding time constant. It is shown that the model reproduces dolphin sleep, and that the sleep regimes of humans, cetaceans, and fur seals, the latter both terrestrially and in a marine environment, require only modest changes in contralateral connection strength and homeostatic time constant. It is further demonstrated that fur seals can potentially switch between their terrestrial bihemispheric and aquatic unihemispheric sleep patterns by varying just the contralateral connection strength. These results provide experimentally testable predictions regarding the differences between species that sleep bihemispherically and unihemispherically. PMID:22960411
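
    A heavily simplified toy sketch of the postulated mechanism: two sleep-active populations inhibiting each other across hemispheres, each driven by a slow homeostatic pressure. All parameters are invented and untuned; depending on their values the toy either alternates (unihemispheric-like) or settles to a symmetric state, echoing the paper's finding that contralateral connection strength controls the regime. It is a caricature, not the physiologically based model itself.

      import numpy as np

      def sigmoid(v):
          return 1.0 / (1.0 + np.exp(-v))

      dt, T = 0.01, 2000.0
      steps = int(T / dt)
      w_contra = 6.0        # contralateral sleep-sleep inhibition (the key knob)
      tau_h = 200.0         # slow homeostatic time constant

      v = np.array([0.6, 0.4])   # "sleep activity" of left/right hemisphere
      h = np.array([0.0, 0.0])   # homeostatic pressure per hemisphere
      trace = []
      for _ in range(steps):
          drive = 2.0 + 4.0 * h - w_contra * v[::-1]   # pressure minus cross-inhibition
          v += dt * (-v + sigmoid(drive))
          h += dt * ((1.0 - v) - h) / tau_h            # pressure grows while "awake"
          trace.append(v.copy())

      trace = np.array(trace)
      print("left-hemisphere sleep fraction :", (trace[:, 0] > 0.5).mean().round(2))
      print("right-hemisphere sleep fraction:", (trace[:, 1] > 0.5).mean().round(2))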

  11. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  12. First Principles Quantitative Modeling of Molecular Devices

    NASA Astrophysics Data System (ADS)

    Ning, Zhanyu

    In this thesis, we report theoretical investigations of nonlinear and nonequilibrium quantum electronic transport properties of molecular transport junctions from atomistic first principles. The aim is to seek not only qualitative but also quantitative understanding of the corresponding experimental data. At present, the challenges to quantitative theoretical work in molecular electronics include two most important questions: (i) what is the proper atomic model for the experimental devices? (ii) how to accurately determine quantum transport properties without any phenomenological parameters? Our research is centered on these questions. We have systematically calculated atomic structures of the molecular transport junctions by performing total energy structural relaxation using density functional theory (DFT). Our quantum transport calculations were carried out by implementing DFT within the framework of Keldysh non-equilibrium Green's functions (NEGF). The calculated data are directly compared with the corresponding experimental measurements. Our general conclusion is that quantitative comparison with experimental data can be made if the device contacts are correctly determined. We calculated properties of nonequilibrium spin injection from Ni contacts to octane-thiolate films which form a molecular spintronic system. The first principles results allow us to establish a clear physical picture of how spins are injected from the Ni contacts through the Ni-molecule linkage to the molecule, why tunnel magnetoresistance is rapidly reduced by the applied bias in an asymmetric manner, and to what extent ab initio transport theory can make quantitative comparisons to the corresponding experimental data. We found that extremely careful sampling of the two-dimensional Brillouin zone of the Ni surface is crucial for accurate results in such a spintronic system. We investigated the role of contact formation and its resulting structures to quantum transport in several molecular

  13. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid.

  14. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid.

  15. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that the resulting waves obey the Schroedinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  16. Toward quantitative modeling of silicon phononic thermocrystals

    NASA Astrophysics Data System (ADS)

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Monfray, S.; Skotnicki, T.; Dubois, E.

    2015-03-01

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of "thermocrystals" or "nanophononic crystals" that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known "electron crystal-phonon glass" dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
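
    For reference, the Green-Kubo methodology named above extracts the lattice thermal conductivity from equilibrium heat-flux fluctuations; in its standard form for a component α,

      $$ \kappa_{\alpha} \;=\; \frac{1}{V k_{B} T^{2}} \int_{0}^{\infty} \bigl\langle J_{\alpha}(0)\, J_{\alpha}(t) \bigr\rangle \, dt $$

    where V is the simulation cell volume, T the temperature, k_B Boltzmann's constant, and J_α the heat flux, with the autocorrelation accumulated over the molecular dynamics trajectory.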

  17. Toward quantitative modeling of silicon phononic thermocrystals

    SciTech Connect

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Dubois, E.; Monfray, S.; Skotnicki, T.

    2015-03-16

    The wealth of technological patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of "thermocrystals" or "nanophononic crystals" that introduce regular nano-scale inclusions using a pitch scale in between the thermal phonons mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced down to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known "electron crystal-phonon glass" dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution from bulk, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After having discussed the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.

  18. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  19. Existence of Periodic Solutions for a Modified Growth Solow Model

    NASA Astrophysics Data System (ADS)

    Fabião, Fátima; Borges, Maria João

    2010-10-01

    In this paper we analyze the dynamics of the Solow growth model with a Cobb-Douglas production function. For this purpose, we consider that the labour growth rate, L'(t)/L(t), is a T-periodic function, for a fixed positive real number T. We obtain closed-form solutions for the fundamental Solow equation with the new description of L(t). Using notions of the qualitative theory of ordinary differential equations and nonlinear functional analysis, we prove that there exists one T-periodic solution for the Solow equation. From the economic point of view this is a new result which allows a more realistic interpretation of the stylized facts.
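
    For orientation, the fundamental Solow equation with a Cobb-Douglas production function is a Bernoulli ODE in capital per worker k(t); with a T-periodic labour growth rate n(t) = L'(t)/L(t) (and depreciation δ, which some formulations absorb or omit) it reads

      $$ k'(t) \;=\; s\,k(t)^{\alpha} \;-\; \bigl(n(t) + \delta\bigr)\,k(t), \qquad 0 < \alpha < 1, $$

    and the substitution z = k^{1-α} linearizes it to

      $$ z'(t) \;=\; (1-\alpha)\,s \;-\; (1-\alpha)\bigl(n(t)+\delta\bigr)\,z(t), $$

    a linear equation with T-periodic coefficients, which is the setting in which periodic-solution arguments of the kind the paper uses apply.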

  20. Comparative Application of Capacity Models for Seismic Vulnerability Evaluation of Existing RC Structures

    SciTech Connect

    Faella, C.; Lima, C.; Martinelli, E.; Nigro, E.

    2008-07-08

    Seismic vulnerability assessment of existing buildings is one of the most common tasks in which structural engineers are currently engaged. Since it is often a preliminary step in deciding how to retrofit structures that were not seismically designed and detailed, it plays a key role in the successful choice of the most suitable strengthening technique. In this framework, the basic information for both seismic assessment and retrofitting is related to the formulation of capacity models for structural members. Plenty of proposals, often contradictory from a quantitative standpoint, are currently available in the technical and scientific literature for defining structural capacity in terms of forces and displacements, possibly with reference to different parameters representing the seismic response. The present paper briefly reviews some of the capacity models for RC members and compares them with reference to two case studies assumed to be representative of a wide class of existing buildings.

  1. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
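
    The force computation described above rests on the standard expression for the force on a magnetic dipole in a field gradient; for a superparamagnetic bead of induced moment m(B) aligned with the field, saturating at m_sat in strong fields,

      $$ \mathbf{F} \;=\; \nabla\bigl(\mathbf{m}\cdot\mathbf{B}\bigr), \qquad F_{z} \;\approx\; m_{\mathrm{sat}}\,\frac{\partial \lvert \mathbf{B} \rvert}{\partial z}, $$

    so that once the field of the magnet pair is known (semianalytically via Biot-Savart or numerically), the force on a tethered bead follows from the gradient of the field magnitude.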

  2. Quantitative assessment of computational models for retinotopic map formation.

    PubMed

    Hjorth, J J Johannes; Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2015-06-01

    Molecular and activity-based cues acting together are thought to guide retinal axons to their terminal sites in vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity-based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2-EphA3(ki/ki), Isl2-EphA3(ki/+), ephrin-A2,A3,A5 triple knock-out (TKO), and Math5(-/-) (Atoh7). Two models successfully reproduced the extent of the Math5(-/-) anteromedial projection, but only one of those could account for the collapse point in Isl2-EphA3(ki/+). The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin-A2,A3,A5 TKO phenotype, suggesting either an incomplete knock-out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. PMID:25367067

  3. Quantitative assessment of computational models for retinotopic map formation

    PubMed Central

    Hjorth, J J Johannes; Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2014-01-01

    Molecular and activity-based cues acting together are thought to guide retinal axons to their terminal sites in vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity-based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2-EphA3(ki/ki), Isl2-EphA3(ki/+), ephrin-A2,A3,A5 triple knock-out (TKO), and Math5(-/-) (Atoh7). Two models successfully reproduced the extent of the Math5(-/-) anteromedial projection, but only one of those could account for the collapse point in Isl2-EphA3(ki/+). The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin-A2,A3,A5 TKO phenotype, suggesting either an incomplete knock-out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. © 2014 Wiley Periodicals, Inc. Develop Neurobiol 75: 641–666, 2015 PMID:25367067

  4. Modeling conflict: research methods, quantitative modeling, and lessons learned.

    SciTech Connect

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. The study presents modeling efforts built on limited data and working literature paradigms, together with recommendations for future attempts at modeling conflict.

  5. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

    PubMed Central

    Quo, Chang F; Wang, May D

    2008-01-01

    Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges, from 10^-15 to 10^10, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with quantitative performance assessment metric, we show that it is possible
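
    The guidance above is specific to MATLAB's solver suite, but the comparison itself is easy to reproduce. Below is a minimal Python/SciPy sketch (an illustration for this listing, not the authors' code) that integrates the Oregonator with an explicit solver and two stiff-aware ones; parameter values follow the classic OREGO test problem.

```python
# Minimal sketch: comparing SciPy ODE solvers on the stiff Oregonator model of
# the Belousov-Zhabotinskii reaction. RK45 is roughly analogous to MATLAB's
# ode45 (explicit) and BDF to ode15s (stiff); LSODA switches automatically.
from scipy.integrate import solve_ivp

def oregonator(t, y):
    # Classic OREGO test-problem parameterization (Hairer & Wanner).
    y1, y2, y3 = y
    return [77.27 * (y2 + y1 * (1.0 - 8.375e-6 * y1 - y2)),
            (y3 - (1.0 + y1) * y2) / 77.27,
            0.161 * (y1 - y3)]

y0, t_span = [1.0, 2.0, 3.0], (0.0, 360.0)
for method in ("RK45", "LSODA", "BDF"):
    sol = solve_ivp(oregonator, t_span, y0, method=method, rtol=1e-6, atol=1e-9)
    # A stiff-aware method needs orders of magnitude fewer RHS evaluations.
    print(f"{method}: {sol.nfev} RHS evaluations, success={sol.success}")
```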

  6. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals

    EPA Science Inventory

    Protocols for terrestrial bioaccumulation assessments are far less developed than those for aquatic systems. This manuscript reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, inver...

  7. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    PubMed

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

    Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques, applied only to those parts of the system where kinetic data are missing. The proposed approach is demonstrated in a case study on a model of the three-gene repressilator. PMID:26451831

  8. Existing Soil Carbon Models Do Not Apply to Forested Wetlands.

    SciTech Connect

    Trettin, C C; Song, B; Jurgensen, M F; Li, C

    2001-09-14

    This study evaluates 12 widely used soil carbon models to determine their applicability to wetland ecosystems. For any land area that includes wetlands, none of the individual models would produce reasonable simulations based on soil processes. The study presents a wetland soil carbon model framework based on desired attributes, the DNDC model, and components of the CENTURY and WMEM models. The proposed synthesis would be appropriate when considering soil carbon dynamics at multiple spatial scales and where the land area considered includes both wetland and upland ecosystems.

  9. Training of Existing Workers: Issues, Incentives and Models

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This report presents issues associated with incentives for training existing workers in small to medium-sized firms, identified through a small sample of case studies from the retail, manufacturing, and building and construction industries. While the majority of employers recognise workforce skill levels are fundamental to the success of the…

  10. Comparison of Existing Response Criteria in Patients with Hepatocellular Carcinoma Treated with Transarterial Chemoembolization Using a 3D Quantitative Approach

    PubMed Central

    Tacher, Vania; Lin, MingDe; Duran, Rafael; Yarmohammadi, Hooman; Lee, Howard; Chapiro, Julius; Chao, Michael; Wang, Zhijun; Frangakis, Constantine; Sohn, Jae Ho; Maltenfort, Mitchell Gil; Pawlik, Timothy; Geschwind, Jean-François

    2015-01-01

    Purpose To compare currently available non-three-dimensional methods (Response Evaluation Criteria in Solid Tumors [RECIST], European Association for Study of the Liver [EASL], modified RECIST [mRECIST]) with three-dimensional (3D) quantitative methods of the index tumor as early response markers in predicting patient survival after initial transcatheter arterial chemoembolization (TACE). Materials and Methods This was a retrospective single-institution HIPAA-compliant and institutional review board–approved study. From November 2001 to November 2008, 491 consecutive patients underwent intraarterial therapy for liver cancer with either conventional TACE or TACE with drug-eluting beads. A diagnosis of hepatocellular carcinoma (HCC) was made in 290 of these patients. The response of the index tumor on pre- and post-TACE magnetic resonance images was assessed retrospectively in 78 treatment-naïve patients with HCC (63 male; mean age, 63 years ± 11 [standard deviation]). Each response assessment method (RECIST, mRECIST, EASL, and 3D methods of volumetric RECIST [vRECIST] and quantitative EASL [qEASL]) was used to classify patients as responders or nonresponders by following standard guidelines for the uni- and bidimensional measurements and by using the formula for a sphere for the 3D measurements. The Kaplan-Meier method with the log-rank test was performed for each method to evaluate its ability to help predict survival of responders and nonresponders. Uni- and multivariate Cox proportional hazard ratio models were used to identify covariates that had significant association with survival. Results The uni- and bidimensional measurements of RECIST (hazard ratio, 0.6; 95% confidence interval [CI]: 0.3, 1.0; P = .09), mRECIST (hazard ratio, 0.6; 95% CI: 0.6, 1.0; P = .05), and EASL (hazard ratio, 1.1; 95% CI: 0.6, 2.2; P = .75) did not show a significant difference in survival between responders and nonresponders, whereas vRECIST (hazard ratio, 0.6; 95% CI: 0.3, 1
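
    The sphere formula mentioned for the 3D measurements is straightforward to apply. The sketch below is a minimal Python illustration, not the study's software; the 65% volume-reduction cutoff for calling a responder is a hypothetical threshold chosen for the example.

```python
# Minimal sketch: diameters (cm) are converted to sphere-equivalent volumes,
# and a patient is classed as a responder if the tumor volume drops by at
# least `threshold`. The 65% cutoff is an illustrative assumption.
import math

def sphere_volume(diameter_cm: float) -> float:
    """Volume (cm^3) of a sphere with the given diameter."""
    return math.pi * diameter_cm ** 3 / 6.0

def volumetric_response(d_pre_cm: float, d_post_cm: float,
                        threshold: float = 0.65) -> bool:
    v_pre, v_post = sphere_volume(d_pre_cm), sphere_volume(d_post_cm)
    return (v_pre - v_post) / v_pre >= threshold

print(volumetric_response(5.0, 3.4))  # True: ~69% volume reduction
```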

  11. A Quantitative Software Risk Assessment Model

    NASA Technical Reports Server (NTRS)

    Lee, Alice

    2002-01-01

    This slide presentation reviews a risk assessment model as applied to software development. The presentation uses graphs to demonstrate basic concepts of software reliability. It also discusses the application of the risk model to the software development life cycle.

  12. Mathematical Existence Results for the Doi-Edwards Polymer Model

    NASA Astrophysics Data System (ADS)

    Chupin, Laurent

    2016-07-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in the modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists of a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove that the solution is global in time in the two-dimensional case, without any restriction on the smallness of the data; such global results are generally much more difficult to obtain for flows of viscoelastic type.

  13. What Are We Doing When We Translate from Quantitative Models?

    PubMed Central

    Critchfield, Thomas S; Reed, Derek D

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively. PMID:22478533

  14. A review: Quantitative models for lava flows on Mars

    NASA Technical Reports Server (NTRS)

    Baloga, S. M.

    1987-01-01

    The purpose of this abstract is to review and assess the application of quantitative models (Gratz numerical correlation model, radiative loss model, yield stress model, surface structure model, and kinematic wave model) of lava flows on Mars. These theoretical models were applied to Martian flow data to aid in establishing the composition of the lava or to determine other eruption conditions such as eruption rate or duration.

  15. Application of existing design software to problems in neuronal modeling.

    PubMed

    Vranić-Sowers, S; Fleshman, J W

    1994-03-01

    In this communication, we describe the application of the Valid/Analog Design Tools circuit simulation package called PC Workbench to the problem of modeling the electrical behavior of neural tissue. A nerve cell representation as an equivalent electrical circuit using compartmental models is presented. Several types of nonexcitable and excitable membranes are designed, and simulation results for different types of electrical stimuli are compared to the corresponding analytical data. It is shown that the hardware/software platform and the models developed constitute an accurate, flexible, and powerful way to study neural tissue. PMID:8045583

  16. A Transformative Model for Undergraduate Quantitative Biology Education

    ERIC Educational Resources Information Center

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating…

  17. Existence of solutions for a host-parasite model

    NASA Astrophysics Data System (ADS)

    Milner, Fabio Augusto; Patton, Curtis Allan

    2001-12-01

    The sea bass Dicentrarchus labrax has several gill ectoparasites. Diplectanum aequans (Plathelminth, Monogenea) is one of these species. Under certain demographic conditions, this flatworm can trigger pathological problems, in particular in fish farms. The life cycle of the parasite is described, and a model for the dynamics of its interaction with the fish is presented and analyzed. The model consists of a coupled system of ordinary differential equations and one integro-differential equation.

  18. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
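
    As a concrete illustration of the common framework described above, the following minimal Python sketch integrates a three-node network of second-order phase oscillators with forcing and damping (the swing equation); all parameter values are illustrative assumptions, not taken from the paper or from any real grid.

```python
# Minimal sketch: a power grid as coupled second-order phase oscillators,
#   M * theta_i'' + D * theta_i' = P_i - sum_j K_ij * sin(theta_i - theta_j)
# with identical (assumed) inertia M and damping D at every node.
import numpy as np
from scipy.integrate import solve_ivp

M, D = 1.0, 0.5                          # inertia and damping (assumed)
P = np.array([1.0, 0.5, -1.5])           # net power: two generators, one load
K = 2.0 * (np.ones((3, 3)) - np.eye(3))  # all-to-all coupling strengths

def swing(t, y):
    theta, omega = y[:3], y[3:]
    coupling = (K * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
    return np.concatenate([omega, (P - D * omega - coupling) / M])

sol = solve_ivp(swing, (0.0, 50.0), np.zeros(6), rtol=1e-8)
# Frequency spread shrinks to ~0 when the generators synchronize.
print("final frequency spread:", np.ptp(sol.y[3:, -1]))
```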

  19. Towards a quantitative model of the post-synaptic proteome.

    PubMed

    Sorokina, Oksana; Sorokin, Anatoly; Armstrong, J Douglas

    2011-10-01

    The postsynaptic compartment of the excitatory glutamatergic synapse contains hundreds of distinct polypeptides with a wide range of functions (signalling, trafficking, cell-adhesion, etc.). Structural dynamics in the post-synaptic density (PSD) are believed to underpin cognitive processes. Although functionally and morphologically diverse, PSD proteins are generally enriched with specific domains, which precisely define the mode of clustering essential for signal processing. We applied a stochastic calculus of domain binding, provided by a rule-based modelling approach, to formalise the highly combinatorial signalling pathway in the PSD and to perform a numerical analysis of the relative distribution of protein complexes and their sizes. We specified the combinatorics of protein interactions in the PSD by rules, taking into account protein domain structure, specific domain affinity and relative protein availability. With this model we interrogated the critical conditions for protein aggregation into large complexes and the distribution of both their size and composition. The presented approach extends existing qualitative protein-protein interaction maps by considering quantitative information on stoichiometry and binding properties for the elements of the network. This results in a more realistic view of the postsynaptic proteome at the molecular level. PMID:21874189

  20. Exploring Higher Education Business Models ("If Such a Thing Exists")

    ERIC Educational Resources Information Center

    Harney, John O.

    2013-01-01

    The global economic recession has caused students, parents, and policymakers to reevaluate personal and societal investments in higher education--and has prompted the realization that traditional higher ed "business models" may be unsustainable. Predicting a shakeout, most presidents expressed confidence for their own school's ability to…

  1. Quantitative model of the Cerro Prieto field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-03-01

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field. It takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled. The results of this model match reasonably well observed pressure and temperature distributions. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  2. Quantitative Model of the Cerro Prieto Field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-01-21

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field. It takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled. The results of this model match reasonably well observed pressure and temperature distributions. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  3. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or workstation. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the University of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  4. Quantitative modeling of quartz vein sealing

    NASA Astrophysics Data System (ADS)

    Wendler, Frank; Okamoto, Atsushi; Schwarz, Jens-Oliver; Enzmann, Frieder; Blum, Philipp

    2014-05-01

    Mineral precipitation significantly affects many aspects of fluid-rock interaction across all length scales, such as the dynamic change of permeability, mechanical interaction, and the redistribution of dissolved material. The hydrothermal growth of quartz is one of the most important mineralization processes in fractures. Tectonically caused fracturing, deformation and fluid transport leave clearly detectable traces in the microstructure of mineralized veins. As these patterns give hints about the deformation history and the fluid pathways through former fracture networks, accurate spatio-temporal modeling of vein mineralization is of special interest, and is the objective of this study. Due to the intricate polycrystalline geometries involved, the underlying physical processes, such as diffusion, advection and crystal growth, have to be captured at the grain scale. To this end, we adapt a thermodynamically consistent phase-field model (PFM), which combines a kinetic growth law and mass transport equations with the irreversible thermodynamics of interfaces and bulk phases. Each grain in the simulation domain is captured by a phase field with an individual orientation given by three Euler angles. The model evolves in discrete time steps using a finite difference algorithm on a regular grid, optimized for large grain assemblies. The underlying processes are highly nonlinear, and for geological samples, boundary conditions as well as many of the physical parameters are not precisely known. One motivation in this study is to validate the adequately parameterized model against hydrothermal experiments under defined (p,T,c) conditions. Different from former approaches to vein growth simulation, the PFM is configured using thermodynamic data from established geochemical models. Previously conducted batch flow experiments of hydrothermal quartz growth were analyzed with electron backscatter diffraction (EBSD) and used to calibrate the unknown kinetic anisotropy parameters. In the

  5. Steps toward quantitative infrasound propagation modeling

    NASA Astrophysics Data System (ADS)

    Waxler, Roger; Assink, Jelle; Lalande, Jean-Marie; Velea, Doru

    2016-04-01

    Realistic propagation modeling requires propagation models capable of incorporating the relevant physical phenomena as well as sufficiently accurate atmospheric specifications. The wind speed and temperature gradients in the atmosphere provide multiple ducts in which low frequency sound, infrasound, can propagate efficiently. The winds in the atmosphere are quite variable, both temporally and spatially, causing the sound ducts to fluctuate. For ground-to-ground propagation the ducts can be borderline, in that small perturbations can create or destroy a duct. In such cases the signal propagation is very sensitive to fluctuations in the wind, often producing highly dispersed signals. The accuracy of atmospheric specifications is constantly improving as sounding technology develops. There is, however, a disconnect between sound propagation and atmospheric specification, in that atmospheric specifications are necessarily statistical in nature while sound propagates through a particular atmospheric state. In addition, infrasonic signals can travel to great altitudes, on the order of 120 km, before refracting back to earth. At such altitudes the atmosphere becomes quite rarefied, causing sound propagation to become highly non-linear and attenuating. Approaches to these problems will be presented.

  6. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  7. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. PMID:25521665
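
    The step of combining several measures into a single "quantitative pathway" component can be illustrated with a short sketch. The Python code below uses synthetic data and scikit-learn's PCA; it is not the study's analysis, and the sample size is the only detail borrowed from the abstract.

```python
# Minimal sketch: collapsing three correlated skill measures (stand-ins for
# subitizing, counting, and symbolic magnitude comparison) into one principal
# component, as in the "quantitative pathway" refinement described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
latent = rng.normal(size=141)  # shared underlying quantitative skill
scores = np.column_stack([latent + rng.normal(scale=s, size=141)
                          for s in (0.5, 0.6, 0.7)])  # three noisy measures

pca = PCA(n_components=1)
component = pca.fit_transform(StandardScaler().fit_transform(scores))
print("variance explained by first component:",
      round(pca.explained_variance_ratio_[0], 2))
```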

  8. Stoffenmanager exposure model: development of a quantitative algorithm.

    PubMed

    Tielemans, Erik; Noy, Dook; Schinkel, Jody; Heussen, Henri; Van Der Schaaf, Doeke; West, John; Fransman, Wouter

    2008-08-01

    In The Netherlands, the web-based tool called 'Stoffenmanager' was initially developed to assist small- and medium-sized enterprises to prioritize and control risks of handling chemical products in their workplaces. The aim of the present study was to explore the accuracy of the Stoffenmanager exposure algorithm. This was done by comparing its semi-quantitative exposure rankings for specific substances with exposure measurements collected from several occupational settings to derive a quantitative exposure algorithm. Exposure data were collected using two strategies. First, we conducted seven surveys specifically for validation of the Stoffenmanager. Second, existing occupational exposure data sets were collected from various sources. This resulted in 378 and 320 measurements for solid and liquid scenarios, respectively. The Spearman correlation coefficients between Stoffenmanager scores and exposure measurements appeared to be good for handling solids (r_s = 0.80, N = 378, P < 0.0001) and liquid scenarios (r_s = 0.83, N = 320, P < 0.0001). However, the correlation for liquid scenarios appeared to be lower when calculated separately for sets of volatile substances with a vapour pressure >10 Pa (r_s = 0.56, N = 104, P < 0.0001) and non-volatile substances with a vapour pressure ≤10 Pa (r_s = 0.53, N = 216, P < 0.0001). The mixed-effect regression models with natural log-transformed Stoffenmanager scores as independent parameter explained a substantial part of the total exposure variability (52% for solid scenarios and 76% for liquid scenarios). Notwithstanding the good correlation, the data show substantial variability in exposure measurements given a certain Stoffenmanager score. The overall performance increases our confidence in the use of the Stoffenmanager as a generic tool for risk assessment. The mixed-effect regression models presented in this paper may be used for assessment of so-called reasonable worst case exposures. This evaluation is
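
    The two statistics reported above, a Spearman rank correlation and a regression on natural-log-transformed scores, can be reproduced on synthetic data in a few lines of Python. This is a simplified stand-in: the paper used mixed-effect models, whereas the sketch fits a plain log-log regression.

```python
# Minimal sketch: Spearman correlation between tool scores and measured
# exposures, plus a regression of log exposure on log score. All data are
# synthetic; only the sample size (320 liquid scenarios) echoes the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
score = rng.lognormal(mean=0.0, sigma=1.0, size=320)           # tool scores
exposure = score ** 0.8 * rng.lognormal(sigma=0.5, size=320)   # e.g. mg/m^3

rho, p = stats.spearmanr(score, exposure)
slope, intercept, r, _, _ = stats.linregress(np.log(score), np.log(exposure))
print(f"Spearman r_s = {rho:.2f} (p = {p:.1e}); log-log R^2 = {r**2:.2f}")
```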

  9. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals.

    PubMed

    Gobas, Frank A P C; Burkhard, Lawrence P; Doucette, William J; Sappington, Keith G; Verbruggen, Eric M J; Hope, Bruce K; Bonnell, Mark A; Arnot, Jon A; Tarazona, Jose V

    2016-01-01

    Protocols for terrestrial bioaccumulation assessments are far less developed than those for aquatic systems. This article reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, invertebrate, mammal, and avian species and for entire terrestrial food webs, including some that consider spatial factors. Limitations and gaps in terrestrial bioaccumulation modeling include the lack of QSARs for biotransformation and dietary assimilation efficiencies for terrestrial species; the lack of models and QSARs for important terrestrial species such as insects, amphibians, and reptiles; the lack of standardized testing protocols for plants, with limited development of plant models; and the limited chemical domain of existing bioaccumulation models and QSARs (e.g., primarily applicable to nonionic organic chemicals). There is an urgent need for high-quality field data sets for validating models and assessing their performance. There is a need to improve coordination among laboratory, field, and modeling efforts on bioaccumulative substances in order to improve the state of the science for challenging substances. PMID:26272325

  10. Lessons learned from quantitative dynamical modeling in systems biology.

    PubMed

    Raue, Andreas; Schilling, Marcel; Bachmann, Julie; Matteson, Andrew; Schelker, Max; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D; Theis, Fabian J; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and an increasing amount of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows the quality of experimental data to be determined in an efficient, objective and automated manner. Using this approach, data generated by different measurement techniques, and even single replicates, can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on Latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642
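
    The winning strategy identified above, derivative-based least-squares optimization with multi-start initial guesses drawn by Latin hypercube sampling, can be sketched compactly. The model, parameter bounds, and data below are illustrative stand-ins, not the paper's benchmark problems.

```python
# Minimal sketch: multi-start least-squares fitting. Twenty starting points
# are drawn by Latin hypercube sampling from an (assumed) parameter box, each
# is refined by a derivative-based solver, and the best fit is kept.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import qmc

t = np.linspace(0.0, 10.0, 50)
true_p = np.array([1.5, 0.4])  # "unknown" parameters used to generate data
data = true_p[0] * np.exp(-true_p[1] * t) \
       + 0.01 * np.random.default_rng(2).normal(size=t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - data

# Latin hypercube starts in the box [0, 5] x [0, 2]
starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(20), [0, 0], [5, 2])
fits = [least_squares(residuals, p0) for p0 in starts]
best = min(fits, key=lambda f: f.cost)
print("best-fit parameters:", best.x)
```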

  11. Quantitative and logic modelling of gene and molecular networks

    PubMed Central

    Le Novère, Nicolas

    2015-01-01

    Behaviours of complex biomolecular systems are often irreducible to the elementary properties of their individual components. Explanatory and predictive mathematical models are therefore useful for fully understanding and precisely engineering cellular functions. The development and analyses of these models require their adaptation to the problems that need to be solved and the type and amount of available genetic or molecular data. Quantitative and logic modelling are among the main methods currently used to model molecular and gene networks. Each approach comes with inherent advantages and weaknesses. Recent developments show that hybrid approaches will become essential for further progress in synthetic biology and in the development of virtual organisms. PMID:25645874

  12. Sensitivity, noise and quantitative model of Laser Speckle Contrast Imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai

    In this dissertation, I present several studies on Laser Speckle Contrast Imaging (LSCI). The two major goals of these studies are: (1) to improve the signal-to-noise ratio (SNR) of LSCI so it can be used to detect small blood flow changes due to brain activities; (2) to find a reliable quantitative model so LSCI results can be compared among experiments and subjects, and even with results from other blood flow monitoring techniques. We sought to improve SNR in the following ways: (1) We investigated the relationship between exposure time and the sensitivities of LSCI. We found that relative sensitivity reaches its maximum at an exposure time of around 5 ms. (2) We studied the relationship between laser speckle and the camera aperture stop, which is in effect the relationship between laser speckle and the speckle/pixel size ratio. In general, the speckle-to-pixel size ratio should be approximately 1.5-2 to maximize the detection factor beta as well as the speckle contrast (SC) value and absolute sensitivity. This is also an important study for quantitative model development. (3) We worked on noise analysis and modeling. Noise affects both the SNR and the quantitative model. Usually random noise is more critical for SNR analysis. The main random noises in LSCI are statistical noise and physiological noise. Some physiological noises are caused by the small motions induced by heartbeat or breathing. These are periodic and can be eliminated using methods discussed in this dissertation. Statistical noise is more fundamental and cannot be eliminated entirely. However, it can be greatly reduced by increasing the effective pixel number N for speckle contrast processing. To develop the quantitative model, we did the following: (1) We considered more experimental factors in the quantitative model and removed several ideal-case assumptions. In particular, in our model we considered the general detection factor beta, static scatterers and systematic noise. A simple calibration procedure is suggested
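
    The basic quantity behind LSCI is the speckle contrast K = sigma/mean computed in small windows of a raw speckle image. The following minimal Python sketch (with an arbitrary 7x7 window and a synthetic exponential-intensity image) illustrates the computation; it is not the dissertation's code.

```python
# Minimal sketch: sliding-window speckle contrast K = sigma / mean. For fully
# developed raw speckle (exponential intensity statistics) K is close to 1.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img: np.ndarray, win: int = 7) -> np.ndarray:
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img ** 2, win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)  # guard tiny negatives
    return np.sqrt(var) / mean

img = np.random.default_rng(3).exponential(scale=100.0, size=(256, 256))
print("mean contrast:", round(float(speckle_contrast(img).mean()), 2))
```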

  13. Existence of almost periodic solution of a model of phytoplankton allelopathy with delay

    NASA Astrophysics Data System (ADS)

    Abbas, Syed; Mahto, Lakshman

    2012-09-01

    In this paper we discuss a non-autonomous two-species competitive allelopathic phytoplankton model in which each species produces a chemical that stimulates the growth of the other. We have studied the existence and uniqueness of an almost periodic solution for this model system. Sufficient conditions are derived for the existence of a unique almost periodic solution.

  14. Transgenic models of Alzheimer's disease: better utilization of existing models through viral transgenesis.

    PubMed

    Platt, Thomas L; Reeves, Valerie L; Murphy, M Paul

    2013-09-01

    Animal models have been used for decades in the Alzheimer's disease (AD) research field and have been crucial for the advancement of our understanding of the disease. Most models are based on familial AD mutations of genes involved in the amyloidogenic process, such as the amyloid precursor protein (APP) and presenilin 1 (PS1). Some models also incorporate mutations in tau (MAPT) known to cause frontotemporal dementia, a neurodegenerative disease that shares some elements of neuropathology with AD. While these models are complex, they fail to display pathology that perfectly recapitulates that of the human disease. Unfortunately, this level of pre-existing complexity creates a barrier to the further modification and improvement of these models. However, as the efficacy and safety of viral vectors improves, their use as an alternative to germline genetic modification is becoming a widely used research tool. In this review we discuss how this approach can be used to better utilize common mouse models in AD research. This article is part of a Special Issue entitled: Animal Models of Disease. PMID:23619198

  15. Quantitative modeling of facet development in ventifacts by sand abrasion

    NASA Astrophysics Data System (ADS)

    Várkonyi, Péter L.; Laity, Julie E.; Domokos, Gábor

    2016-03-01

    We use a quantitative model to examine rock abrasion by direct impacts of sand grains. Two distinct mechanisms are uncovered (unidirectional and isotropic), which contribute to the macro-scale morphological characteristics (sharp edges and flat facets) of ventifacts. It is found that facet formation under a unidirectional wind relies on certain mechanical properties of the rock material, and we confirm the dominant role of this mechanism in the formation of large ventifacts. Nevertheless, small ventifacts may also acquire polyhedral shapes in a different way (the isotropic mechanism), which is sensitive neither to wind characteristics nor to rock material properties. The latter mechanism leads to several 'mature' shapes, which are surprisingly analogous to the morphologies of typical small ventifacts. Our model is also able to explain certain quantitative laboratory and field observations, including the rapid decay of facet angles of ventifacts followed by stabilization in the range 20-30°.

  16. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing was carried out synchronously to verify the MMM results. It is found that MMM testing can detect a hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter on which to build a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
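
    Stress-strength interference theory, on which the reliability model above builds, has a convenient closed form when both stress and strength are Gaussian: R = Phi((mu_strength - mu_stress) / sqrt(sigma_strength^2 + sigma_stress^2)). A minimal Python sketch with illustrative numbers, not values from the paper:

```python
# Minimal sketch: classical stress-strength interference. With Gaussian
# strength S and Gaussian stress s, reliability R = P(S > s) in closed form.
from math import sqrt
from scipy.stats import norm

mu_strength, sd_strength = 400.0, 30.0  # e.g. MPa (illustrative)
mu_stress, sd_stress = 300.0, 40.0      # e.g. MPa (illustrative)

R = norm.cdf((mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2))
print(f"reliability R = {R:.4f}")       # ~0.977 for these values
```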

  17. Quantitative phase-field modeling of dendritic electrodeposition.

    PubMed

    Cogswell, Daniel A

    2015-07-01

    A thin-interface phase-field model of electrochemical interfaces is developed based on Marcus kinetics for concentrated solutions, and used to simulate dendrite growth during electrodeposition of metals. The model is derived in the grand electrochemical potential to permit the interface to be widened to reach experimental length and time scales, and electroneutrality is formulated to eliminate the Debye length. Quantitative agreement is achieved with zinc Faradaic reaction kinetics, fractal growth dimension, tip velocity, and radius of curvature. Reducing the exchange current density is found to suppress the growth of dendrites, and screening electrolytes by their exchange currents is suggested as a strategy for controlling dendrite growth in batteries. PMID:26274118

  18. Quantitative analysis of a wind energy conversion model

    NASA Astrophysics Data System (ADS)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-03-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. Output power of the generator is measured in a wind tunnel with up to 15 m s⁻¹ air velocity. The maximum power is 3.4 W, and the power conversion factor from kinetic to electric energy is c_p = 0.15. The v³ power law is confirmed. The model illustrates several technically important features of industrial wind turbines quantitatively.
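
    The reported numbers can be checked directly against the standard power relation P = (1/2) c_p rho A v³. The short Python sketch below uses the rotor diameter and conversion factor from the abstract and an assumed air density of 1.2 kg/m³:

```python
# Minimal sketch: electrical output of the model turbine from the v^3 law,
# P = 0.5 * c_p * rho * A * v**3, with the abstract's diameter and c_p.
import math

rho = 1.2                        # air density, kg/m^3 (assumed)
c_p = 0.15                       # kinetic-to-electric conversion factor
A = math.pi * (0.12 / 2) ** 2    # swept area of the 12 cm rotor, m^2

def electric_power(v: float) -> float:
    return 0.5 * c_p * rho * A * v ** 3

print(f"P(15 m/s) = {electric_power(15.0):.2f} W")  # ~3.4 W, as measured
```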

  19. Quantitative phase-field modeling of dendritic electrodeposition

    NASA Astrophysics Data System (ADS)

    Cogswell, Daniel A.

    2015-07-01

    A thin-interface phase-field model of electrochemical interfaces is developed based on Marcus kinetics for concentrated solutions, and used to simulate dendrite growth during electrodeposition of metals. The model is derived in the grand electrochemical potential to permit the interface to be widened to reach experimental length and time scales, and electroneutrality is formulated to eliminate the Debye length. Quantitative agreement is achieved with zinc Faradaic reaction kinetics, fractal growth dimension, tip velocity, and radius of curvature. Reducing the exchange current density is found to suppress the growth of dendrites, and screening electrolytes by their exchange currents is suggested as a strategy for controlling dendrite growth in batteries.

  20. Quantitative magnetospheric models derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.

  1. Strong existence and uniqueness of the stationary distribution for a stochastic inviscid dyadic model

    NASA Astrophysics Data System (ADS)

    Andreis, Luisa; Barbato, David; Collet, Francesca; Formentin, Marco; Provenzano, Luigi

    2016-03-01

    We consider an inviscid stochastically forced dyadic model, where the additive noise acts only on the first component. We prove that a strong solution for this problem exists and is unique by means of uniform energy estimates. Moreover, we exploit these results to establish strong existence and uniqueness of the stationary distribution.

  2. Quantitative modeling of transcription factor binding specificities using DNA shape.

    PubMed

    Zhou, Tianyin; Shen, Ning; Yang, Lin; Abe, Namiko; Horton, John; Mann, Richard S; Bussemaker, Harmen J; Gordân, Raluca; Rohs, Remo

    2015-04-14

    DNA binding specificities of transcription factors (TFs) are a key component of gene regulatory processes. Underlying mechanisms that explain the highly specific binding of TFs to their genomic target sites are poorly understood. A better understanding of TF-DNA binding requires the ability to quantitatively model TF binding to accessible DNA as its basic step, before additional in vivo components can be considered. Traditionally, these models were built based on nucleotide sequence. Here, we integrated 3D DNA shape information derived with a high-throughput approach into the modeling of TF binding specificities. Using support vector regression, we trained quantitative models of TF binding specificity based on protein binding microarray (PBM) data for 68 mammalian TFs. The evaluation of our models included cross-validation on specific PBM array designs, testing across different PBM array designs, and using PBM-trained models to predict relative binding affinities derived from in vitro selection combined with deep sequencing (SELEX-seq). Our results showed that shape-augmented models compared favorably to sequence-based models. Although both k-mer and DNA shape features can encode interdependencies between nucleotide positions of the binding site, using DNA shape features reduced the dimensionality of the feature space. In addition, analyzing the feature weights of DNA shape-augmented models uncovered TF family-specific structural readout mechanisms that were not revealed by the DNA sequence. As such, this work combines knowledge from structural biology and genomics, and suggests a new path toward understanding TF binding and genome function. PMID:25775564
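
    The modeling step described, support vector regression on per-position features of a binding site, can be sketched with scikit-learn. The features and affinities below are synthetic stand-ins; the real study trained on protein binding microarray data with derived DNA shape features.

```python
# Minimal sketch: support vector regression on a feature matrix standing in
# for sequence (k-mer) features augmented with DNA shape features, evaluated
# by cross-validated R^2, loosely mirroring the workflow described above.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 40))  # stand-in sequence + shape features
y = X[:, :10].sum(axis=1) + 0.1 * rng.normal(size=500)  # stand-in affinities

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
print("CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```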

  3. Quantitative structure property relationship modeling of excipient properties for prediction of formulation characteristics.

    PubMed

    Gaikwad, Vinod L; Bhatia, Neela M; Desai, Sujit A; Bhatia, Manish S

    2016-10-20

    Quantitative structure property relationship (QSPR) modeling is used to relate excipient descriptors to formulation properties. A QSPR model is developed by regression analysis of selected descriptors contributing towards the targeted formulation properties. The developed QSPR model was validated by the true external method, where it showed good accuracy and precision in predicting the formulation composition: the experimental t90% (61.35 min) is very close to the predicted t90% (67.37 min). Hence, the QSPR approach saves resources by predicting drug release before a formulation is prepared, avoiding repetitive trials in the development of a new formulation and/or the optimization of an existing one. PMID:27474604

  4. The conceptual approach to quantitative modeling of guard cells

    PubMed Central

    Blatt, Michael R.; Hills, Adrian; Chen, Zhong-Hua; Wang, Yizhou; Papanatsiou, Maria; Lew, Vigilio L.

    2013-01-01

    Much of the 70% of global water usage associated with agriculture passes through stomatal pores of plant leaves. The guard cells, which regulate these pores, thus have a profound influence on photosynthetic carbon assimilation and water use efficiency of plants. We recently demonstrated how quantitative mathematical modeling of guard cells with the OnGuard modeling software yields detail sufficient to guide phenotypic and mutational analysis. This advance represents an all-important step toward applications in directing “reverse-engineering” of guard cell function for improved water use efficiency and carbon assimilation. OnGuard is nonetheless challenging for those unfamiliar with a modeler’s way of thinking. In practice, each model construct represents a hypothesis under test, to be discarded, validated or refined by comparisons between model predictions and experimental results. The few guidelines set out here summarize the standard and logical starting points for users of the OnGuard software. PMID:23221747

  5. A quantitative coarse-grain model for lipid bilayers.

    PubMed

    Orsi, Mario; Haubertin, David Y; Sanderson, Wendy E; Essex, Jonathan W

    2008-01-24

    A simplified particle-based computer model for hydrated phospholipid bilayers has been developed and applied to quantitatively predict the major physical features of fluid-phase biomembranes. Compared with available coarse-grain methods, three novel aspects are introduced. First, the main electrostatic features of the system are incorporated explicitly via charges and dipoles. Second, water is accurately (yet efficiently) described, on an individual level, by the soft sticky dipole model. Third, hydrocarbon tails are modeled using the anisotropic Gay-Berne potential. Simulations are conducted by rigid-body molecular dynamics. Our technique proves 2 orders of magnitude less demanding of computational resources than traditional atomic-level methodology. Self-assembled bilayers quantitatively reproduce experimental observables such as electron density, compressibility moduli, dipole potential, lipid diffusion, and water permeability. The lateral pressure profile has been calculated, along with the elastic curvature constants of the Helfrich expression for the membrane bending energy; results are consistent with experimental estimates and atomic-level simulation data. Several of the results presented have been obtained for the first time using a coarse-grain method. Our model is also directly compatible with atomic-level force fields, allowing mixed systems to be simulated in a multiscale fashion. PMID:18085766

  6. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models or between models and real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find that the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found the social force model agrees best with this real data.

  7. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. PMID:24130119

  8. Three models intercomparison for Quantitative Precipitation Forecast over Calabria

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.

    2004-11-01

    In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations), funded by MIUR (Ministero dell'Università e della Ricerca Scientifica, the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Modeling 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison. This model runs operationally at “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici”. Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. The large-scale forcing is the same for all models considered, and differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models. The largest differences are in the BIA score compared with the other considered scores. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better for the second-day forecast.
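
    Among the verification scores mentioned, BIA (frequency bias) is the simplest: the ratio of forecast rain events to observed rain events at a threshold, computed from a contingency table of model output versus raingauge data. A minimal Python sketch with illustrative counts, not values from the study:

```python
# Minimal sketch: frequency bias from a 2x2 forecast/observation contingency
# table. BIA = (hits + false alarms) / (hits + misses); 1.0 means unbiased,
# above 1.0 means the model forecasts rain more often than it is observed.
def bias_score(hits: int, false_alarms: int, misses: int) -> float:
    return (hits + false_alarms) / (hits + misses)

print(bias_score(hits=120, false_alarms=60, misses=40))  # 1.125: overforecast
```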

  9. Quantitative model of magnetic coupling between solar wind and magnetosphere

    NASA Technical Reports Server (NTRS)

    Toffoletto, F. R.; Hill, T. W.

    1986-01-01

    Preliminary results are presented of a quantitative three-dimensional model of an open steady-state magnetosphere configuration incorporating a normal-component distribution corresponding to the subsolar merging-line hypothesis. The distribution of the normal magnetic-field component at the magnetopause is used as input and is used to calculate an interconnection magnetic field that links the internal and external fields. The interconnected field is then used to map the solar-wind electric field onto the polar cap. The resulting polar-cap flow patterns are found to be in agreement with observations.

  10. Quantitative Modeling of Single Atom High Harmonic Generation

    SciTech Connect

    Gordon, Ariel; Kaertner, Franz X.

    2005-11-25

    It is shown by comparison with numerical solutions of the Schrödinger equation that the three-step model (TSM) of high harmonic generation (HHG) can be improved to give a quantitatively reliable description of the process. Excellent agreement is demonstrated for the H atom and the H₂⁺ molecular ion. It is shown that the standard TSM heavily distorts the HHG spectra, especially of H₂⁺, and an explanation is presented for this behavior. Key to the improvement is the use of the Ehrenfest theorem in the TSM.
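
    A quick quantitative handle on the TSM is its cutoff law, E_cutoff = Ip + 3.17 Up. The sketch below evaluates the law for atomic hydrogen under an assumed 800 nm, 10^14 W/cm^2 driver; the driver parameters are illustrative and not taken from the paper.

```python
# Back-of-envelope three-step-model cutoff, E_cutoff = Ip + 3.17*Up,
# for atomic hydrogen.  Driver wavelength and intensity are assumptions.
ip_ev = 13.6                        # ionization potential of H (eV)
wavelength_um = 0.8                 # Ti:sapphire driver (assumed)
intensity = 1e14                    # peak intensity in W/cm^2 (assumed)

# Ponderomotive energy: Up [eV] ~= 9.33e-14 * I[W/cm^2] * (lambda[um])^2
up_ev = 9.33e-14 * intensity * wavelength_um**2
e_cutoff = ip_ev + 3.17 * up_ev

photon_ev = 1.2398 / wavelength_um  # driver photon energy (eV)
print(f"Up = {up_ev:.2f} eV, cutoff = {e_cutoff:.1f} eV "
      f"~ harmonic {e_cutoff / photon_ev:.0f}")
```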

  11. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  12. Quantitative modeling of soil sorption for xenobiotic chemicals

    SciTech Connect

    Sabljic, A.

    1989-11-01

    Experimentally determining soil sorption behavior of xenobiotic chemicals during the last 10 years has been costly, time-consuming, and very tedious. Since an estimated 100,000 chemicals are currently in common use and new chemicals are registered at a rate of 1000 per year, it is obvious that our human and material resources are insufficient to experimentally obtain their soil sorption data. Much work is being done to find alternative methods that will enable us to accurately and rapidly estimate the soil sorption coefficients of pesticides and other classes of organic pollutants. Empirical models, based on water solubility and n-octanol/water partition coefficients, have been proposed as alternative, accurate methods to estimate soil sorption coefficients. An analysis of the models has shown (a) low precision of water solubility and n-octanol/water partition data, (b) varieties of quantitative models describing the relationship between the soil sorption and above-mentioned properties, and (c) violations of some basic statistical laws when these quantitative models were developed. During the last 5 years considerable efforts were made to develop nonempirical models that are free of the errors inherent in all models based on empirical variables. Thus far molecular topology has been shown to be the most successful structural property for describing and predicting soil sorption coefficients. The first-order molecular connectivity index was demonstrated to correlate extremely well with the soil sorption coefficients of polycyclic aromatic hydrocarbons (PAHs), alkylbenzenes, chlorobenzenes, chlorinated alkanes and alkenes, heterocyclic and heterosubstituted PAHs, and halogenated phenols. The average difference between predicted and observed soil sorption coefficients is only 0.2 on the logarithmic scale (corresponding to a factor of 1.5). 63 references.
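
    The connectivity approach amounts to a one-variable linear regression of log Koc on the first-order molecular connectivity index. A minimal sketch follows; the (index, log Koc) pairs are invented for demonstration and are not measured values.

```python
# One-variable regression of log Koc on the first-order molecular
# connectivity index (1chi).  The data pairs are invented placeholders.
import numpy as np

chi = np.array([2.5, 3.4, 3.9, 4.5, 5.2, 5.8, 6.4, 7.1])       # hypothetical
log_koc = np.array([2.1, 2.6, 2.9, 3.3, 3.7, 4.0, 4.4, 4.8])   # hypothetical

slope, intercept = np.polyfit(chi, log_koc, 1)
pred = slope * chi + intercept
print(f"log Koc ~= {slope:.2f} * 1chi + {intercept:.2f}")
print(f"mean |residual| = {np.abs(log_koc - pred).mean():.3f} log units")
```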

  13. Quantitative modeling of soil sorption for xenobiotic chemicals.

    PubMed Central

    Sabljić, A

    1989-01-01

    Experimentally determining soil sorption behavior of xenobiotic chemicals during the last 10 years has been costly, time-consuming, and very tedious. Since an estimated 100,000 chemicals are currently in common use and new chemicals are registered at a rate of 1000 per year, it is obvious that our human and material resources are insufficient to experimentally obtain their soil sorption data. Much work is being done to find alternative methods that will enable us to accurately and rapidly estimate the soil sorption coefficients of pesticides and other classes of organic pollutants. Empirical models, based on water solubility and n-octanol/water partition coefficients, have been proposed as alternative, accurate methods to estimate soil sorption coefficients. An analysis of the models has shown (a) low precision of water solubility and n-octanol/water partition data, (b) varieties of quantitative models describing the relationship between the soil sorption and above-mentioned properties, and (c) violations of some basic statistical laws when these quantitative models were developed. During the last 5 years considerable efforts were made to develop nonempirical models that are free of the errors inherent in all models based on empirical variables. Thus far molecular topology has been shown to be the most successful structural property for describing and predicting soil sorption coefficients. The first-order molecular connectivity index was demonstrated to correlate extremely well with the soil sorption coefficients of polycyclic aromatic hydrocarbons (PAHs), alkylbenzenes, chlorobenzenes, chlorinated alkanes and alkenes, heterocyclic and heterosubstituted PAHs, and halogenated phenols. The average difference between predicted and observed soil sorption coefficients is only 0.2 on the logarithmic scale (corresponding to a factor of 1.5). A comparison of the molecular connectivity model with the empirical models described earlier shows that the former is superior.

  14. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    NASA Technical Reports Server (NTRS)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: Fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) Optimization tools; 2) Gas turbine models for hybrid systems; 3) Increased interplay between subsystems; 4) Off-design modeling capabilities; 5) Altitude effects; and 6) Existing transient modeling architecture. Other factors include: 1) Easier transfer between users and groups of users; 2) General aerospace industry acceptance and familiarity; and 3) A flexible analysis tool that can also be used for ground power applications.

  15. A quantitative evaluation of models for Aegean crustal deformation

    NASA Astrophysics Data System (ADS)

    Nyst, M.; Thatcher, W.

    2003-04-01

    Modeling studies of eastern Mediterranean tectonics show that Aegean deformation is mainly determined by WSW directed expulsion of Anatolia and SW directed extension due to roll-back of African lithosphere along the Hellenic trench. How motion is transferred across the Aegean remains a subject of debate. The two most widely used hypotheses for Aegean tectonics assert fundamentally different mechanisms. The first model describes deformation as a result of opposing rotations of two rigid microplates separated by a zone of extension. In the second model most motion is accommodated by shear on a series of dextral faults and extension on graben systems. These models make different quantitative predictions for the crustal deformation field that can be tested by a new, spatially dense GPS velocity data set. To convert the GPS data into crustal deformation parameters we use different methods to model complementary aspects of crustal deformation. We parameterize the main fault and plate boundary structures of both models and produce representations for the crustal deformation field that range from purely rigid rotations of microplates, via interacting, elastically deforming blocks separated by crustal faults to a continuous velocity gradient field. Critical evaluation of these models indicates strengths and limitations of each and suggests new measurements for further refining understanding of present-day Aegean tectonics.
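
    The rigid-microplate end-member of the model spectrum reduces to predicting site velocities from an Euler rotation, v = ω × r, which can then be compared against the GPS velocities. A minimal sketch follows, with a made-up pole and site rather than a published Aegean solution.

```python
# Rigid-microplate prediction of a GPS site velocity from an Euler
# rotation, v = omega x r.  Pole position, rate, and site are made up
# for illustration, not a published Aegean solution.
import numpy as np

R_EARTH = 6371e3  # m

def unit_vec(lat_deg, lon_deg):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def site_velocity(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Return (east, north) velocity in mm/yr at the site."""
    omega = np.radians(rate_deg_per_myr) / 1e6 * unit_vec(pole_lat, pole_lon)
    r = R_EARTH * unit_vec(site_lat, site_lon)
    v = np.cross(omega, r) * 1e3                 # mm/yr, Earth-centered frame
    lat, lon = np.radians(site_lat), np.radians(site_lon)
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat) * np.cos(lon),
                      -np.sin(lat) * np.sin(lon),
                      np.cos(lat)])
    return v @ east, v @ north

ve, vn = site_velocity(38.0, 20.0, 0.5, 37.0, 25.0)
print(f"predicted site velocity: {ve:+.1f} mm/yr east, {vn:+.1f} mm/yr north")
```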

  16. A QUANTITATIVE MODEL OF ERROR ACCUMULATION DURING PCR AMPLIFICATION

    PubMed Central

    Pienaar, E; Theron, M; Nelson, M; Viljoen, HJ

    2006-01-01

    The amplification of target DNA by the polymerase chain reaction (PCR) produces copies which may contain errors. Two sources of errors are associated with the PCR process: (1) editing errors that occur during DNA polymerase-catalyzed enzymatic copying and (2) errors due to DNA thermal damage. In this study a quantitative model of error frequencies is proposed and the role of reaction conditions is investigated. The errors which are ascribed to the polymerase depend on the efficiency of its editing function as well as the reaction conditions; specifically the temperature and the dNTP pool composition. Thermally induced errors stem mostly from three sources: A+G depurination, oxidative damage of guanine to 8-oxoG and cytosine deamination to uracil. The post-PCR modifications of sequences are primarily due to exposure of nucleic acids to elevated temperatures, especially if the DNA is in a single-stranded form. The proposed quantitative model predicts the accumulation of errors over the course of a PCR cycle. Thermal damage contributes significantly to the total errors; therefore consideration must be given to thermal management of the PCR process. PMID:16412692
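
    A minimal version of such error bookkeeping simply accumulates expected lesions per molecule cycle by cycle, combining polymerase misincorporation with thermal damage. The rates below are illustrative placeholders, not the paper's fitted values.

```python
# Toy per-cycle bookkeeping of PCR errors: polymerase misincorporation
# plus thermally induced lesions (depurination, 8-oxoG, deamination).
# All rates and times below are illustrative placeholders.
L = 500                  # amplicon length (bases)
cycles = 30
pol_error = 1e-5         # polymerase errors per base per duplication
thermal_rate = 2e-7      # lesions per base per second near denaturation
t_hot = 30.0             # seconds per cycle spent at denaturation temperature

errors = 0.0             # expected errors carried by an average molecule
for c in range(1, cycles + 1):
    errors += L * thermal_rate * t_hot   # thermal damage accrues every cycle
    errors += L * pol_error              # each duplication adds copying errors
    if c % 10 == 0:
        print(f"cycle {c}: expected errors per molecule = {errors:.3f}")
```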

  17. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  18. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety. PMID:20055976
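
    The following toy discrete-event sketch conveys the core idea: packs enter a FIFO shelf, leave when stochastic demand draws them, and pathogen growth accrues with residence time, so the storage-time tails propagate into the risk tails. All parameters are invented for illustration.

```python
# Toy discrete-event view of retail storage for a QMRA: packs enter a
# FIFO shelf, leave when demand draws them, and Listeria growth accrues
# with residence time.  All parameters are invented for illustration.
import random

random.seed(2)
growth_rate = 0.3        # log10 CFU increase per day on the shelf
restock = 40             # packs stocked each day
days = 200

shelf = []               # stocking day of each pack, oldest first
growth = []              # log10 growth accumulated by each sold pack

for day in range(days):
    shelf.extend([day] * restock)
    demand = random.randint(20, 60)            # packs demanded today
    for _ in range(min(demand, len(shelf))):
        stocked = shelf.pop(0)                 # FIFO: oldest pack sold first
        growth.append(growth_rate * (day - stocked))

growth.sort()
n = len(growth)
for q in (0.50, 0.95, 0.99):                   # the risk sits in the tails
    print(f"{q:.0%} quantile of growth: {growth[int(q * (n - 1))]:.2f} log10 CFU")
```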

  19. Quantitative Analysis of Cancer Metastasis using an Avian Embryo Model

    PubMed Central

    Palmer, Trenis D.; Lewis, John; Zijlstra, Andries

    2011-01-01

    During metastasis cancer cells disseminate from the primary tumor, invade surrounding tissues, and spread to distant organs. Metastasis is a complex process that can involve many tissue types, span variable time periods, and often occur deep within organs, making it difficult to investigate and quantify. In addition, the efficacy of the metastatic process is influenced by multiple steps in the metastatic cascade, making it difficult to evaluate the contribution of a single aspect of tumor cell behavior. As a consequence, metastasis assays are frequently performed in experimental animals to provide a necessarily realistic context in which to study metastasis. Unfortunately, these models are further complicated by their complex physiology. The chick embryo is a unique in vivo model that overcomes many limitations to studying metastasis, due to the accessibility of the chorioallantoic membrane (CAM), a well-vascularized extra-embryonic tissue located underneath the eggshell that is receptive to the xenografting of tumor cells (figure 1). Moreover, since the chick embryo is naturally immunodeficient, the CAM readily supports the engraftment of both normal and tumor tissues. Most importantly, the avian CAM successfully supports most cancer cell characteristics including growth, invasion, angiogenesis, and remodeling of the microenvironment. This makes the model exceptionally useful for investigating the pathways that lead to cancer metastasis and for predicting the response of metastatic cancer to new potential therapeutics. The detection of disseminated cells by species-specific Alu PCR makes it possible to quantitatively assess metastasis in organs that are colonized by as few as 25 cells. Using the human epidermoid carcinoma cell line HEp3, we apply this model to analyze spontaneous metastasis of cancer cells to distant organs, including the chick liver and lung. Furthermore, using the Alu-PCR protocol we demonstrate the sensitivity and reproducibility of the assay.
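
    Quantification by Alu qPCR typically proceeds through a standard curve of Ct against log10(cell number). The sketch below fits such a curve and inverts it for unknown samples; the Ct values are invented for illustration.

```python
# Standard-curve quantification for species-specific Alu qPCR: fit Ct
# against log10(cell number), then interpolate unknown organ samples.
# All Ct values are invented for illustration.
import numpy as np

std_cells = np.array([25, 100, 1000, 10000, 100000])   # spiked-in human cells
std_ct = np.array([33.1, 31.0, 27.7, 24.4, 21.1])      # hypothetical Ct values

slope, intercept = np.polyfit(np.log10(std_cells), std_ct, 1)
print(f"standard curve: Ct = {slope:.2f} * log10(cells) + {intercept:.2f}")
# a slope near -3.32 corresponds to ~100% amplification efficiency

def cells_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

for ct in (29.5, 32.8):                                 # unknown samples
    print(f"Ct = {ct}: ~{cells_from_ct(ct):.0f} human cells")
```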

  20. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    NASA Astrophysics Data System (ADS)

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-01

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy and results for the ITER antenna configuration, including detailed maps of the electric field and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model [1], using the finite-difference electromagnetic Vorpal/Vsim software [2]. This model has been augmented with a non-linear rf-sheath sub-grid model [3], which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex descriptions of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves [4] in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques [5] to ensure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential should enable the modeling of sheath plasma waves, a predicted additional root of the dispersion relation.

  1. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    SciTech Connect

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-12

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy and results for the ITER antenna configuration, including detailed maps of the electric field and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model, using the finite-difference electromagnetic Vorpal/Vsim software. This model has been augmented with a non-linear rf-sheath sub-grid model, which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex descriptions of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques to ensure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential should enable the modeling of sheath plasma waves, a predicted additional root of the dispersion relation existing at the low-density plasma edge.

  2. Existence of vortices in a self-dual gauged linear sigma model and its singular limit

    NASA Astrophysics Data System (ADS)

    Kim, Namkwon

    2006-03-01

    We study rigorously the static (2 + 1)D gauged linear sigma model introduced by Schroers. Analysing the governing system of partial differential equations, we show the existence of finite-energy vortices under the partially broken symmetry on R2, subject to conditions consistent with the necessary conditions given by Yang. Also, with a special choice of representation, we show that the gauged O(3) sigma model is a singular limit of the gauged linear sigma model.

  3. A quantitative evaluation of the AVITEWRITE model of handwriting learning.

    PubMed

    Paine, R W; Grossberg, S; Van Gemmert, A W A

    2004-12-01

    Much sensory-motor behavior develops through imitation, as during the learning of handwriting by children. Such complex sequential acts are broken down into distinct motor control synergies, or muscle groups, whose activities overlap in time to generate continuous, curved movements that obey an inverse relation between curvature and speed. The adaptive vector integration to endpoint handwriting (AVITEWRITE) model of Grossberg and Paine (2000) [A neural model of corticocerebellar interactions during attentive imitation and predictive learning of sequential handwriting movements. Neural Networks, 13, 999-1046] addressed how such complex movements may be learned through attentive imitation. The model suggested how parietal and motor cortical mechanisms, such as difference vector encoding, interact with adaptively-timed, predictive cerebellar learning during movement imitation and predictive performance. Key psychophysical and neural data about learning to make curved movements were simulated, including a decrease in writing time as learning progresses; generation of unimodal, bell-shaped velocity profiles for each movement synergy; size scaling with isochrony, and speed scaling with preservation of the letter shape and the shapes of the velocity profiles; an inverse relation between curvature and tangential velocity; and a two-thirds power law relation between angular velocity and curvature. However, the model learned from letter trajectories of only one subject, and only qualitative kinematic comparisons were made with previously published human data. The present work describes a quantitative test of AVITEWRITE through direct comparison of a corpus of human handwriting data with the model's performance when it learns by tracing the human trajectories. The results show that model performance was variable across the subjects, with an average correlation between the model and human data of 0.89+/-0.10. The present data from simulations using the AVITEWRITE model
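
    The two-thirds power law mentioned above is easy to verify numerically: for elliptic harmonic motion the tangential speed satisfies v = (ab)^(1/3) κ^(-1/3) exactly, so a log-log fit of speed against curvature should return an exponent of -1/3. A short check follows.

```python
# Check of the two-thirds power law on elliptic harmonic motion
# (x = a*cos t, y = b*sin t), which satisfies v = (ab)^(1/3) * k^(-1/3)
# exactly; the log-log fit should recover an exponent of -1/3.
import numpy as np

a, b = 2.0, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dx, dy = -a * np.sin(t), b * np.cos(t)       # velocity
ddx, ddy = -a * np.cos(t), -b * np.sin(t)    # acceleration

speed = np.hypot(dx, dy)                     # tangential velocity
curvature = np.abs(dx * ddy - dy * ddx) / speed**3

exponent, _ = np.polyfit(np.log(curvature), np.log(speed), 1)
print(f"fitted exponent: {exponent:.4f} (two-thirds law predicts -1/3)")
```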

  4. Adapting existing models of highly contagious diseases to countries other than their country of origin.

    PubMed

    Dubé, C; Sanchez, J; Reeves, A

    2011-08-01

    Many countries do not have the resources to develop epidemiological models of animal diseases. As a result, it is tempting to use models developed in other countries. However, an existing model may need to be adapted in order for it to be appropriately applied in a country, region, or situation other than that for which it was originally developed. The process of adapting a model has a number of benefits for both model builders and model users. For model builders, it provides insight into the applicability of their model and potentially the opportunity to obtain data for operational validation of components of their model. For users, it is a chance to think about the infection transmission process in detail, to review the data available for modelling, and to learn the principles of epidemiological modelling. Various issues must be addressed when considering adapting a model. Most critically, the assumptions and purpose behind the model must be thoroughly understood, so that new users can determine its suitability for their situation. The process of adapting a model might simply involve changing existing model parameter values (for example, to better represent livestock demographics in a country or region), or might require more substantial (and more labour-intensive) changes to the model code and conceptual model. Adapting a model is easier if the model has a user-friendly interface and easy-to-read user documentation. In addition, models built as frameworks within which disease processes and livestock demographics and contacts are flexible are good candidates for technology transfer projects, which lead to long-term collaborations. PMID:21961228

  5. Quantitative model of the growth of floodplains by vertical accretion

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to the estimation of floodplain growth for other floodplains throughout the world which do not have detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
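
    The model's feedback (deposition raises the flood threshold, which lowers the inundation frequency) can be conveyed in a few lines. The sketch below draws an illustrative partial-duration flood series and applies a constant net increment per overbank flood; the parameter values are not those of the paper.

```python
# Sketch of floodplain growth by vertical accretion: each flood whose
# stage exceeds the current floodplain elevation deposits a constant
# increment D, raising the threshold for later floods.  The flood-stage
# distribution and all parameters are illustrative.
import random

random.seed(3)
D = 0.02                  # net deposition per overbank flood (m)
peaks_per_year = 5        # partial-duration series
elevation = 0.0           # floodplain height above the initial surface (m)

for year in range(1, 201):
    overbank = 0
    for _ in range(peaks_per_year):
        stage = random.expovariate(1.0)      # flood stage above datum (m)
        if stage > elevation:                # inundation only above the bank
            elevation += D
            overbank += 1
    if year % 40 == 0:
        print(f"year {year:3d}: {overbank} overbank floods, "
              f"elevation = {elevation:.2f} m")
```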

  6. Discrete modeling of hydraulic fracturing processes in a complex pre-existing fracture network

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rutqvist, J.; Nakagawa, S.; Houseworth, J. E.; Birkholzer, J. T.

    2015-12-01

    Hydraulic fracturing and stimulation of fracture networks are widely used by the energy industry (e.g., shale gas extraction, enhanced geothermal systems) to increase permeability of geological formations. Numerous analytical and numerical models have been developed to help understand and predict the behavior of hydraulically induced fractures. However, many existing models assume simple fracturing scenarios with highly idealized fracture geometries (e.g., propagation of a single fracture with assumed shapes in a homogeneous medium). Modeling hydraulic fracture propagation in the presence of natural fractures and heterogeneities can be very challenging because of the complex interactions between fluid, rock matrix, and rock interfaces, as well as the interactions between propagating fractures and pre-existing natural fractures. In this study, the TOUGH-RBSN code for coupled hydro-mechanical modeling is utilized to simulate hydraulic fracture propagation and its interaction with pre-existing fracture networks. The simulation tool combines TOUGH2, a simulator of subsurface multiphase flow and mass transport based on the finite volume approach, with the implementation of a lattice modeling approach for geomechanical and fracture-damage behavior, named Rigid-Body-Spring Network (RBSN). The discrete fracture network (DFN) approach is facilitated in the Voronoi discretization via a fully automated modeling procedure. The numerical program is verified through a simple simulation for single fracture propagation, in which the resulting fracture geometry is compared to an analytical solution for given fracture length and aperture. Subsequently, predictive simulations are conducted for planned laboratory experiments using rock-analogue (soda-lime glass) samples containing a designed, pre-existing fracture network. The results of a preliminary simulation demonstrate selective fracturing and fluid infiltration along the pre-existing fractures, with additional fracturing in part

  7. A Pleiotropic Nonadditive Model of Variation in Quantitative Traits

    PubMed Central

    Caballero, A.; Keightley, P. D.

    1994-01-01

    A model of mutation-selection-drift balance incorporating pleiotropic and dominance effects of new mutations on quantitative traits and fitness is investigated and used to predict the amount and nature of genetic variation maintained in segregating populations. The model is based on recent information on the joint distribution of mutant effects on bristle traits and fitness in Drosophila melanogaster from experiments on the accumulation of spontaneous and P element-induced mutations. These experiments suggest a leptokurtic distribution of effects with an intermediate correlation between effects on the trait and fitness. Mutants of large effect tend to be partially recessive while those with smaller effect are on average additive, but apparently with very variable gene action. The model is parameterized with two different sets of information derived from P element insertion and spontaneous mutation data, though the latter are not fully known. They differ in the number of mutations per generation which is assumed to affect the trait. Predictions of the variance maintained for bristle number assuming parameters derived from effects of P element insertions, in which the proportion of mutations with an effect on the trait is small, fit reasonably well with experimental observations. The equilibrium genetic variance is nearly independent of the degree of dominance of new mutations. Heritabilities of between 0.4 and 0.6 are predicted with population sizes from 10^4 to 10^6, and most of the variance for the metric trait in segregating populations is due to a small proportion of mutations (about 1% of the total number) with neutral or nearly neutral effects on fitness and intermediate effects on the trait (0.1-0.5 σP). Much of the genetic variance is contributed by recessive or partially recessive mutants, but only a small proportion (about 10%) of the genetic variance is dominance variance. The amount of apparent selection on the trait itself generated by the model is

  8. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    PubMed

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040
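
    In outline, the workflow is: transform the three sediment fractions to two additive log-ratios, regress each on the predictors with a random forest, and back-transform. The sketch below reproduces that pipeline on synthetic data with scikit-learn; the predictors and coefficients are invented stand-ins.

```python
# Pipeline sketch on synthetic data: (mud, sand, gravel) fractions are
# handled as two additive log-ratios, each regressed on environmental
# predictors with a random forest, then back-transformed.  Predictors
# and coefficients are invented stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 4))       # stand-ins for depth, wave energy, etc.

alr1 = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, n)  # log(mud/gravel)
alr2 = 1.2 * X[:, 2] + 0.4 * X[:, 3] + rng.normal(0, 0.3, n)  # log(sand/gravel)

def alr_inverse(a1, a2):
    denom = 1.0 + np.exp(a1) + np.exp(a2)
    return np.exp(a1) / denom, np.exp(a2) / denom, 1.0 / denom

train, test = slice(0, 1500), slice(1500, None)
models = [RandomForestRegressor(n_estimators=200, random_state=0)
          .fit(X[train], target[train]) for target in (alr1, alr2)]

mud, sand, gravel = alr_inverse(models[0].predict(X[test]),
                                models[1].predict(X[test]))
print(f"R^2 log(mud/gravel):  {models[0].score(X[test], alr1[test]):.2f}")
print(f"R^2 log(sand/gravel): {models[1].score(X[test], alr2[test]):.2f}")
print(f"fractions sum to 1:   {bool(np.allclose(mud + sand + gravel, 1.0))}")
```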

  9. Towards Quantitative Spatial Models of Seabed Sediment Composition

    PubMed Central

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom’s parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040

  10. Sensitivity of quantitative sensory models to morphine analgesia in humans

    PubMed Central

    Olesen, Anne Estrup; Brock, Christina; Sverrisdóttir, Eva; Larsen, Isabelle Myriam; Drewes, Asbjørn Mohr

    2014-01-01

    Introduction Opioid analgesia can be explored with quantitative sensory testing, but most investigations have used models of phasic pain, and such brief stimuli may be limited in their ability to faithfully simulate natural and clinical painful experiences. Therefore, identification of appropriate experimental pain models is critical for our understanding of opioid effects, with the potential to improve treatment. Objectives The aim was to explore and compare various pain models to morphine analgesia in healthy volunteers. Methods The study was a double-blind, randomized, two-way crossover study. Thirty-nine healthy participants were included and received morphine 30 mg (2 mg/mL) as oral solution or placebo. To cover both tonic and phasic stimulations, a comprehensive multi-modal, multi-tissue pain-testing program was performed. Results Tonic experimental pain models were sensitive to morphine analgesia compared to placebo: muscle pressure (F=4.87, P=0.03), bone pressure (F=3.98, P=0.05), rectal pressure (F=4.25, P=0.04), and the cold pressor test (F=25.3, P<0.001). Compared to placebo, morphine increased tolerance to muscle stimulation by 14.07%; bone stimulation by 9.72%; rectal mechanical stimulation by 20.40%, and reduced pain reported during the cold pressor test by 9.14%. In contrast, the more phasic experimental pain models were not sensitive to morphine analgesia: skin heat, rectal electrical stimulation, or rectal heat stimulation (all P>0.05). Conclusion Pain models with deep tonic stimulation, including C-fiber activation and/or endogenous pain modulation, were more sensitive to morphine analgesia. To avoid false negative results in future studies, we recommend inclusion of reproducible tonic pain models in deep tissues, mimicking clinical pain to a higher degree. PMID:25525384

  11. Quantitative determination of guggulsterone in existing natural populations of Commiphora wightii (Arn.) Bhandari for identification of germplasm having higher guggulsterone content.

    PubMed

    Kulhari, Alpana; Sheorayan, Arun; Chaudhury, Ashok; Sarkar, Susheel; Kalia, Rajwant K

    2015-01-01

    Guggulsterone is an aromatic steroidal ketonic compound obtained from the vertical resin ducts and canals of the bark of Commiphora wightii (Arn.) Bhandari (Family - Burseraceae). Owing to its multifarious medicinal and therapeutic values as well as its various other significant bioactivities, guggulsterone is in high demand in the pharmaceutical, perfumery and incense industries. More and more pharmaceutical and perfumery industries are showing interest in guggulsterone; therefore, there is a need for its quantitative determination in existing natural populations of C. wightii. Elite germplasm identified as having higher guggulsterone content can then be multiplied through conventional or biotechnological means. In the present study an effort was made to estimate two isoforms of guggulsterone, i.e., E- and Z-guggulsterone, in raw exudates of 75 accessions of C. wightii collected from three states of North-western India, viz. Rajasthan (19 districts), Haryana (4 districts) and Gujarat (3 districts). The extracted steroid-rich fraction from stem samples was fractionated using reverse-phase preparative High Performance Liquid Chromatography (HPLC) coupled with a UV/VIS detector operating at a wavelength of 250 nm. HPLC analysis of stem samples of wild as well as cultivated plants showed that the concentrations of the E and Z isomers as well as total guggulsterone were highest in Rajasthan, as compared to the Haryana and Gujarat states. The highest concentrations of E-guggulsterone (487.45 μg/g) and Z-guggulsterone (487.68 μg/g) were found in samples collected from Devikot (Jaisalmer) and Palana (Bikaner), respectively, two hyper-arid regions of Rajasthan, India. The quantitative assay was presented on the basis of a calibration curve obtained from a mixture of standard E- and Z-guggulsterones, with different validatory parameters including linearity, selectivity and specificity, accuracy, auto-injector, flow-rate, recoveries, limit of detection and limit of quantification (as per norms of the International Conference on Harmonisation).
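
    Quantification of this kind rests on an external calibration curve: peak area is regressed on standard concentration, and LOD/LOQ follow from the residual scatter (3.3σ/slope and 10σ/slope, the usual validation formulas). The sketch below uses invented numbers purely to illustrate the calculation.

```python
# Calibration-curve quantification for an HPLC assay: linear fit of
# peak area vs. standard concentration, LOD = 3.3*sigma/slope and
# LOQ = 10*sigma/slope from the residual scatter.  Numbers are invented.
import numpy as np

conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])        # ug/mL standards
area = np.array([61.0, 118.0, 302.0, 597.0, 1205.0, 2404.0])  # peak areas

slope, intercept = np.polyfit(conc, area, 1)
sigma = np.std(area - (slope * conc + intercept), ddof=2)
r2 = np.corrcoef(conc, area)[0, 1] ** 2

print(f"linearity: r^2 = {r2:.4f}")
print(f"LOD = {3.3 * sigma / slope:.2f} ug/mL, LOQ = {10 * sigma / slope:.2f} ug/mL")
print(f"sample with area 487 ~ {(487.0 - intercept) / slope:.1f} ug/mL")
```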

  12. Existence of standard models of conic fibrations over non-algebraically-closed fields

    SciTech Connect

    Avilov, A A

    2014-12-31

    We prove an analogue of Sarkisov's theorem, on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two, for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  13. Global existence for a model of inhomogeneous incompressible elastodynamics in 2D

    NASA Astrophysics Data System (ADS)

    Yin, Silu

    2016-05-01

    In this paper, we investigate a model of incompressible, isotropic, inhomogeneous elastodynamics in two space dimensions, inspired by Lei [18]. We prove global existence for this Cauchy problem with sufficiently small initial displacement and a small density disturbance around a constant state.

  14. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  15. Existence of global weak solution for a reduced gravity two and a half layer model

    SciTech Connect

    Guo, Zhenhua Li, Zilai Yao, Lei

    2013-12-15

    We investigate the existence of a global weak solution to a reduced gravity two and a half layer model in a one-dimensional bounded spatial domain or periodic domain. Also, we show that any possible vacuum state has to vanish within finite time, after which the weak solution becomes a unique strong one.

  16. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    PubMed

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-01-01

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams, with data-centric annotations derived from our existing data warehouse. PMID:18999220

  17. Existence of Limit Cycles in the Solow Model with Delayed-Logistic Population Growth

    PubMed Central

    2014-01-01

    This paper is devoted to the existence and stability analysis of limit cycles in a delayed mathematical model of economic growth. Specifically, the Solow model is extended by inserting a time delay into the logistic population growth rate. Moreover, by choosing the time delay as a bifurcation parameter, we prove that the system loses its stability and a Hopf bifurcation occurs when the time delay passes through critical values. Finally, numerical simulations are carried out to support the analytical results. PMID:24592147
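
    The delayed-logistic mechanism is easy to reproduce numerically: for r·τ > π/2 the delayed logistic population oscillates, and the oscillation feeds into capital through the Solow dynamics. Below is a minimal Euler integration with illustrative parameters, not those of the paper.

```python
# Euler integration of a Solow-type model with delayed-logistic labor,
#   L'(t) = r * L(t) * (1 - L(t - tau) / Lmax),
# which oscillates for r*tau > pi/2 and feeds cycles into capital.
# All parameter values are illustrative.
r, tau, Lmax = 0.8, 2.2, 100.0       # r*tau = 1.76 > pi/2: expect a cycle
s, delta, alpha = 0.25, 0.05, 0.33   # saving rate, depreciation, capital share
dt, T = 0.01, 120.0

lag = int(tau / dt)
L = [10.0] * (lag + 1)               # constant history on [-tau, 0]
K = 5.0

for i in range(int(T / dt)):
    L_now, L_lag = L[-1], L[-lag - 1]
    L.append(L_now + dt * r * L_now * (1.0 - L_lag / Lmax))
    K += dt * (s * K**alpha * L_now**(1.0 - alpha) - delta * K)
    if i % 1000 == 0:                # report every 10 time units
        print(f"t = {i * dt:5.1f}: L = {L_now:7.2f}, K = {K:8.2f}")
```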

  18. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, and (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object which embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superseded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semiquantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area was slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. A set of abstracts of selected papers on the work described below is enclosed. Several papers that draw on the research supported during this period appeared in print after the grant period ended.
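
    The tracker abstraction can be sketched compactly: each tracker carries one model-behavior hypothesis, assimilates observations, and downgrades its own status when predictions fail. The class below is a stand-in illustration of that design, not the MIMIC code; the prediction bounds are invented.

```python
# Stand-in sketch of the tracker abstraction: one (model, behavior)
# hypothesis per tracker, updated as observations arrive, with several
# trackers alive in parallel.  The prediction bounds are invented; this
# is not the MIMIC code.
from dataclasses import dataclass, field

@dataclass
class Tracker:
    model_name: str
    lo: float                       # semi-quantitative prediction bounds
    hi: float
    status: str = "consistent"      # consistent | superseded | refuted
    history: list = field(default_factory=list)

    def assimilate(self, observation: float) -> None:
        self.history.append(observation)
        if self.status == "consistent" and not (self.lo <= observation <= self.hi):
            self.status = "refuted"  # observation falls outside all behaviors

trackers = [Tracker("nominal", 0.9, 1.1),
            Tracker("slow leak", 0.6, 1.0),
            Tracker("sensor fault", 0.0, 2.0)]

for obs in (1.05, 0.95, 0.7):       # incoming observation stream
    for tracker in trackers:
        tracker.assimilate(obs)

for tracker in trackers:
    print(f"{tracker.model_name:12s}: {tracker.status}")
```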

  19. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration to modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment. PMID:27438794
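
    A minimal version of the modeling step, on synthetic stand-in data rather than the 132 coal samples, is sketched below with scikit-learn's PLS implementation and a 10-fold cross-validated RMSE.

```python
# PLS sketch on synthetic stand-ins for the coal data: predict one
# trace-element concentration from 24 coal/ash parameters and report a
# 10-fold cross-validated RMSE.  Data and dimensions are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n, p = 132, 24                        # samples x physico-chemical parameters
X = rng.normal(size=(n, p))
y = X[:, :6] @ rng.uniform(0.5, 2.0, 6) + rng.normal(0.0, 0.5, n)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

rmse_cv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmse_cv:.3f} "
      f"({100.0 * rmse_cv / (y.max() - y.min()):.1f}% of the observed range)")
```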

  20. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration to modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment. PMID:27438794

  1. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    NASA Technical Reports Server (NTRS)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried out

  2. Quantitative rubber sheet models of gravitation wells using Spandex

    NASA Astrophysics Data System (ADS)

    White, Gary

    2008-04-01

    Long a staple of introductory treatments of general relativity, the rubber sheet model exhibits Wheeler's concise summary ("Matter tells space-time how to curve and space-time tells matter how to move") very nicely. But what of the quantitative aspects of the rubber sheet model: how far can the analogy be pushed? We show [1] that when a mass M is suspended from the center of an otherwise unstretched elastic sheet affixed to a circular boundary, it exhibits a distortion far from the center given by h = A(Mr^2)^(1/3). Here, as might be expected, h and r are the vertical and radial distances from the center, but this result is not the expected logarithmic form of 2-D solutions of Laplace's equation (the stretched drumhead). This surprise has a natural explanation, is confirmed experimentally with Spandex as the medium, and its consequences for general rubber sheet models are pursued. [1] "The shape of 'the Spandex' and orbits upon its surface," American Journal of Physics 70, 48-52 (2002), G. D. White and M. Walker; see also the comment by D. S. Lemons and T. C. Lipscombe, American Journal of Physics 70, 1056-1058 (2002).

  3. Quantitative Modeling of the Alternative Pathway of the Complement System.

    PubMed

    Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model, highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of the alternative pathway on the surface of pathogens, on which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent of the host cell surface over the same time period. Our model reveals that tight regulation of complement starts in the fluid phase, in which propagation of the alternative pathway was inhibited through the dismantlement of fluid-phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
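
    A drastically reduced sketch of the alternative-pathway feedback is given below: tickover seeds C3b, convertase amplifies C3 cleavage, and a lumped regulator (standing in for factor H/I activity on host-like surfaces) dismantles C3b and convertases. The three-species system and all rate constants are illustrative simplifications, not the paper's model.

```python
# Drastically reduced alternative-pathway sketch: tickover seeds C3b,
# convertase amplifies C3 cleavage, and a lumped regulator (factor H/I
# activity on host-like surfaces) dismantles C3b and convertases.
# Species, units (uM, minutes), and rate constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def alt_pathway(t, y, k_reg):
    c3, c3b, conv = y
    tickover = 1e-4 * c3                       # fluid-phase hydrolysis
    amplification = 2.0 * conv * c3            # convertase-driven cleavage
    dc3 = -tickover - amplification
    dc3b = tickover + amplification - (0.5 + k_reg) * c3b
    dconv = 0.5 * c3b - (0.1 + k_reg) * conv   # formation vs. decay/regulation
    return [dc3, dc3b, dconv]

t_eval = np.linspace(0.0, 60.0, 601)           # one hour
for surface, k_reg in (("pathogen", 0.0), ("host cell", 5.0)):
    sol = solve_ivp(alt_pathway, (0.0, 60.0), [6.0, 0.0, 0.0],
                    args=(k_reg,), t_eval=t_eval, max_step=0.1)
    print(f"{surface:9s}: peak C3b = {sol.y[1].max():.4f} uM")
```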

  4. Quantitative Modeling of the Alternative Pathway of the Complement System

    PubMed Central

    Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of alternative pathway on the surface of pathogens in which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent on host cells at the same time period. Our model reveals that tight regulation of complement starts in fluid phase in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection. PMID

  5. Quantitative modeling of the terminal differentiation of B cells and mechanisms of lymphomagenesis

    PubMed Central

    Martínez, María Rodríguez; Corradin, Alberto; Klein, Ulf; Álvarez, Mariano Javier; Toffolo, Gianna M.; di Camillo, Barbara; Califano, Andrea; Stolovitzky, Gustavo A.

    2012-01-01

    Mature B-cell exit from germinal centers is controlled by a transcriptional regulatory module that integrates antigen and T-cell signals and, ultimately, leads to terminal differentiation into memory B cells or plasma cells. Despite a compact structure, the module dynamics are highly complex because of the presence of several feedback loops and self-regulatory interactions, and understanding its dysregulation, frequently associated with lymphomagenesis, requires robust dynamical modeling techniques. We present a quantitative kinetic model of three key gene regulators, BCL6, IRF4, and BLIMP, and use gene expression profile data from mature human B cells to determine appropriate model parameters. The model predicts the existence of two different hysteresis cycles that direct B cells through an irreversible transition toward a differentiated cellular state. By synthetically perturbing the interactions in this network, we can elucidate known mechanisms of lymphomagenesis and suggest candidate tumorigenic alterations, indicating that the model is a valuable quantitative tool to simulate B-cell exit from the germinal center under a variety of physiological and pathological conditions. PMID:22308355
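
    The hysteresis mechanism can be illustrated with a toy version of the module: BCL6 and BLIMP1 mutually repress, IRF4 self-activates and drives BLIMP1, and a CD40/antigen-like signal drives IRF4. Sweeping the signal up and then down exposes the irreversible switch. The equations and rates below are illustrative, not the fitted model.

```python
# Toy BCL6-IRF4-BLIMP1 module: mutual repression plus IRF4
# self-activation.  Sweeping the driving signal up and then down shows
# hysteresis (the differentiated state persists at zero signal).
# All equations and rate constants are illustrative inventions.
import numpy as np
from scipy.integrate import odeint

def module(y, t, signal):
    bcl6, irf4, blimp = y
    dbcl6 = 2.0 / ((1 + blimp**2) * (1 + irf4**2)) - bcl6
    dirf4 = signal + 2.2 * irf4**2 / (1 + irf4**2) - irf4
    dblimp = 2.0 * irf4**2 / ((1 + irf4**2) * (1 + bcl6**2)) - blimp
    return [dbcl6, dirf4, dblimp]

t = np.linspace(0, 200, 2001)
y = [2.0, 0.0, 0.0]                  # germinal-center-like start: BCL6 high
up, down = [], []
for signal in np.linspace(0.0, 1.0, 11):
    y = odeint(module, y, t, args=(signal,))[-1]
    up.append(y[2])
for signal in np.linspace(1.0, 0.0, 11):
    y = odeint(module, y, t, args=(signal,))[-1]
    down.append(y[2])

print("signal  BLIMP1 (up-sweep / down-sweep)")
for s, u, d in zip(np.linspace(0.0, 1.0, 11), up, down[::-1]):
    print(f"{s:4.1f}    {u:5.2f} / {d:5.2f}")
```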

  6. Quantitative dual-probe microdialysis: mathematical model and analysis.

    PubMed

    Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles

    2002-04-01

    Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis. PMID:12067242
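
    A standard building block for interpreting such experiments is the steady-state concentration around a point source in porous tissue with first-order clearance, C(r) = Q/(4παDr)·exp(-r√(k/D)). The sketch below evaluates this expression at typical probe separations; the parameter values are illustrative, not the paper's fits.

```python
# Steady-state concentration around an infusing probe treated as a
# point source in porous tissue with first-order clearance:
#   C(r) = Q / (4*pi*alpha*D*r) * exp(-r * sqrt(k/D))
# Parameter values are illustrative, not fitted to the experiments.
import math

Q = 1e-13        # source strength: moles escaping the probe per second
alpha = 0.2      # extracellular volume fraction
D = 7.6e-6       # effective diffusion coefficient (cm^2/s)
k = 1e-4         # first-order clearance rate constant (1/s)

def conc(r_cm):
    return Q / (4.0 * math.pi * alpha * D * r_cm) * math.exp(-r_cm * math.sqrt(k / D))

for r_mm in (0.2, 0.5, 1.0, 2.0):    # typical dual-probe separations
    print(f"r = {r_mm:3.1f} mm: C = {conc(r_mm / 10.0):.3e} mol/cm^3")
```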

  7. Quantitative Model of microRNA-mRNA interaction

    NASA Astrophysics Data System (ADS)

    Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj

    2012-02-01

    MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveal a threshold-linear protein output as a function of microRNA expression level. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold-linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediate regime where the expression levels of mRNA and microRNA have the same order of magnitude. In this crossover regime the mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA (ceRNA) hypothesis, in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
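
    The threshold behaviour falls out of a standard titration scheme: mRNA and microRNA are produced and degraded, bind reversibly, and the bound complex is degraded. A steady-state sweep over the microRNA production rate (all rate constants illustrative, not the paper's) shows free mRNA staying near its unrepressed level at low microRNA production, then collapsing once microRNA production exceeds mRNA production.

      import numpy as np
      from scipy.optimize import fsolve

      km, gm = 1.0, 1.0                          # mRNA production / degradation
      gs, kon, koff, gc = 1.0, 10.0, 0.1, 5.0    # miRNA turnover, binding, complex decay

      def steady(ks):                            # ks: microRNA production rate
          def f(y):
              m, s, c = y
              return [km - gm*m - kon*m*s + koff*c,
                      ks - gs*s - kon*m*s + koff*c,
                      kon*m*s - (koff + gc)*c]
          return fsolve(f, [km/gm, ks/gs, 0.0])

      for ks in (0.2, 0.8, 1.0, 1.2, 2.0):
          print(ks, steady(ks)[0])               # free mRNA collapses near ks = km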

  8. Quantitative comparisons of analogue models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ~20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  9. Quantitative phase-field modeling for boiling phenomena

    NASA Astrophysics Data System (ADS)

    Badillo, Arnoldo

    2012-10-01

    A phase-field model is developed for quantitative simulation of bubble growth in the diffusion-controlled regime. The model accounts for phase change and surface tension effects at the liquid-vapor interface of pure substances with large property contrast. The derivation of the model follows a two-fluid approach, where the diffuse interface is assumed to have an internal microstructure, defined by a sharp interface. Although the phases within the diffuse interface are considered to have their own velocities and pressures, an averaging procedure at the atomic scale allows all the constitutive equations to be expressed in terms of mixture quantities. From the averaging procedure and asymptotic analysis of the model, nonconventional terms appear in the energy and phase-field equations to compensate for the variation of the properties across the diffuse interface. Without these new terms, no convergence towards the sharp-interface model can be attained. The asymptotic analysis also revealed a very small thermal capillary length for real fluids, such as water, which makes it impossible for conventional phase-field models to capture bubble growth in the millimeter size range. For instance, important phenomena such as bubble growth and detachment from a hot surface could not be simulated due to the large number of grid points required to resolve all the scales. Since the shape of the liquid-vapor interface is primarily controlled by the effects of an isotropic surface energy (surface tension), a solution involving the elimination of the curvature from the phase-field equation is devised. The elimination of the curvature from the phase-field equation changes the length scale dominating the phase change from the thermal capillary length to the thickness of the thermal boundary layer, which is several orders of magnitude larger. A detailed analysis of the phase-field equation revealed that a split of this equation into two independent parts is possible for system sizes

  10. Quantitative property-structural relation modeling on polymeric dielectric materials

    NASA Astrophysics Data System (ADS)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications, but the search for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using Quantitative Structure-Property Relationship (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR to polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), is developed to encode the chemical features of pure polymers. ICD are designed to eliminate the uncertainty of polymer conformations and the inconsistency of molecular representations of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperature of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in materials QSPR. PELM is a meta-algorithm that uses classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy than a classic machine learning algorithm (the support vector machine). Multi-mechanism detection is built on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets, each of which can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data are available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix
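
    The multi-mechanism idea (softly partition the data, fit a simple model per partition, and blend predictions by cluster responsibility) can be sketched with standard tools. This is a generic cluster-weighted illustration on synthetic data, not the thesis's actual algorithm, descriptors or datasets.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      X = rng.uniform(0.0, 1.0, (300, 2))
      # two hidden "mechanisms": different linear laws on two regions (synthetic)
      y = np.where(X[:, 0] < 0.5, 2.0*X[:, 1], 5.0 - 3.0*X[:, 1])
      y = y + 0.05*rng.normal(size=300)

      Z = np.column_stack([X, y])                   # cluster jointly over (X, y)
      gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)
      labels = gmm.predict(Z)
      experts = [LinearRegression().fit(X[labels == k], y[labels == k])
                 for k in (0, 1)]
      w = gmm.predict_proba(Z)                      # soft cluster responsibilities
      y_hat = sum(w[:, k]*experts[k].predict(X) for k in (0, 1))
      print(np.mean((y_hat - y)**2))                # far below one global linear fit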

  11. Existence of traveling wave solutions in a diffusive predator-prey model.

    PubMed

    Huang, Jianhua; Lu, Gang; Ruan, Shigui

    2003-02-01

    We establish the existence of traveling front solutions and small amplitude traveling wave train solutions for a reaction-diffusion system based on a predator-prey model with Holling type-II functional response. The traveling front solutions are equivalent to heteroclinic orbits in R^4 and the small amplitude traveling wave train solutions are equivalent to small amplitude periodic orbits in R^4. The methods used to prove the results are the shooting argument and the Hopf bifurcation theorem. PMID:12567231

  12. Quantitative phase-field modeling for wetting phenomena.

    PubMed

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of the interface thickness, W, and β, which is used in the definition of the mixture viscosity μ = μ_l φ_l + μ_v φ_v + β μ_l φ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values of β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates. PMID:25871200
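
    The abstract's mixture viscosity is simple enough to state directly; φ_l, φ_v and φ_w are the liquid, vapour and wall phase fields, and choosing β > 1 increases dissipation at the contact line. The helper below encodes only that formula; the example numbers are not from the paper.

      def mixture_viscosity(phi_l, phi_v, phi_w, mu_l, mu_v, beta):
          # mu = mu_l*phi_l + mu_v*phi_v + beta*mu_l*phi_w  (from the abstract)
          return mu_l*phi_l + mu_v*phi_v + beta*mu_l*phi_w

      # e.g. a water-like liquid against its vapour near a wall point, beta = 2
      print(mixture_viscosity(0.5, 0.3, 0.2, 1.0e-3, 1.0e-5, 2.0))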

  13. An overview of existing modeling tools making use of model checking in the analysis of biochemical networks

    PubMed Central

    Carrillo, Miguel; Góngora, Pedro A.; Rosenblueth, David A.

    2012-01-01

    Model checking is a well-established technique for automatically verifying complex systems. Recently, model checkers have appeared in computer tools for the analysis of biochemical (and gene regulatory) networks. We survey several such tools to assess the potential of model checking in computational biology. Next, our overview focuses on direct applications of existing model checkers, as well as on algorithms for biochemical network analysis influenced by model checking, such as those using binary decision diagrams (BDDs) or Boolean-satisfiability solvers. We conclude with advantages and drawbacks of model checking for the analysis of biochemical networks. PMID:22833747

  14. Normal fault growth above pre-existing structures: insights from discrete element modelling

    NASA Astrophysics Data System (ADS)

    Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas

    2016-04-01

    In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones localise deformation not only in the host rock but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems, and the technique has therefore been applied to fault and fracture growth in a variety of geological settings. We model normal faulting by extending, by 30%, a 60×60×30 km crustal rift-basin model that includes brittle and ductile interactions and gravitational and isostatic forces. An inclined plane of weakness representing a pre-existing shear zone is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of the overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata, where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upwards propagation of pre-existing

  15. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits

    PubMed Central

    Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-01-01

    To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain largely unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) model in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a single-variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI’s Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that joint interaction analysis of multiple phenotypes has much higher power to detect interaction than interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes. PMID:27104857

  16. Quantitative modeling of fluorescent emission in photonic crystals

    NASA Astrophysics Data System (ADS)

    Gutmann, Johannes; Zappe, Hans; Goldschmidt, Jan Christoph

    2013-11-01

    Photonic crystals affect the photon emission of embedded emitters due to an altered local density of photon states (LDOS). We review the calculation of the LDOS from eigenmodes in photonic crystals and propose a rate equation model for fluorescent emitters to determine the changes in emission induced by the LDOS. We show how to calculate the modifications of three experimentally accessible characteristics: the emission spectrum (spectral redistribution), the emitter quantum yield, and the fluorescence lifetime. As an example, we present numerical results for the emission of the dye Rhodamine B inside an opal photonic crystal. For such photonic crystals with small permittivity contrast, the LDOS is only weakly modified, resulting in rather small changes. We point out, however, that in experiments usually only part of the emitted light is detected, and that this detected part can have a very different spectral distribution (e.g., due to a photonic band gap in the direction of detection). We demonstrate the calculation of this detected spectrum for a typical measurement setup. With this reasoning, we explain the previously unresolved experimental observation that strong spectral modifications occurred while at the same time only small changes in lifetime were found. With our approach, the mentioned effects can be quantitatively calculated for fluorescent emitters in any photonic crystal.
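
    The chain of factors the authors describe (intrinsic emission, reweighted by the relative LDOS, then filtered by what the detector actually collects) can be mocked up in a few lines. All curves below are synthetic stand-ins, chosen only to reproduce the qualitative point that the detected spectrum can change strongly while total emission, and hence lifetime, barely does.

      import numpy as np

      w = np.linspace(1.9, 2.4, 500)                     # photon energy, eV (assumed)
      intrinsic = np.exp(-(w - 2.1)**2 / (2*0.05**2))    # dye emission line (synthetic)
      rel_ldos = 1.0 + 0.05*np.sin(8.0*w)                # weak LDOS modulation (synthetic)
      directional = np.where(np.abs(w - 2.15) < 0.03, 0.2, 1.0)  # stop band toward detector
      emitted = intrinsic * rel_ldos                     # nearly unchanged overall
      detected = emitted * directional                   # strongly reshaped spectrum
      print(emitted.sum()/intrinsic.sum(), detected.sum()/emitted.sum())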

  17. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.

    PubMed

    Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-04-01

    To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain largely unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) model in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a single-variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that joint interaction analysis of multiple phenotypes has much higher power to detect interaction than interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes. PMID:27104857

  18. Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model

    SciTech Connect

    Southekal, S.; Purschke, M.L.; Schlyer, D.J.; Vaska, P.

    2011-10-01

    We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph has been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover > 90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.
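
    Once such a probability (system) matrix A exists, reconstruction reduces to a standard MLEM iteration, x ← x · Aᵀ(y / Ax) / Aᵀ1. The sketch below uses a random stand-in matrix and toy Poisson data; the Monte Carlo matrix generation and physics corrections, which are the paper's actual contribution, are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      n_vox, n_lor = 64, 256
      A = rng.uniform(0.0, 1.0, (n_lor, n_vox))     # stand-in for the MC-generated matrix
      x_true = rng.uniform(0.5, 2.0, n_vox)         # toy activity distribution
      y = rng.poisson(A @ x_true)                   # noisy coincidence counts

      x = np.ones(n_vox)
      sens = A.sum(axis=0)                          # sensitivity image, A^T 1
      for _ in range(100):                          # MLEM updates
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
      print(np.corrcoef(x, x_true)[0, 1])           # correlation with the toy phantom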

  19. Uncertainty in Quantitative Precipitation Estimates and Forecasts in a Hydrologic Modeling Context (Invited)

    NASA Astrophysics Data System (ADS)

    Gourley, J. J.; Kirstetter, P.; Hong, Y.; Hardy, J.; Flamig, Z.

    2013-12-01

    This study presents a methodology to account for uncertainty in radar-based rainfall rate estimation using NOAA/NSSL's Multi-Radar Multi-Sensor (MRMS) products. The focus of the study is on flood forecasting, including flash floods, in ungauged catchments throughout the conterminous US. An error model is used to derive probability distributions of rainfall rates that explicitly account for rain typology and uncertainty in the reflectivity-to-rainfall relationships. This approach preserves the fine space/time sampling properties (2 min/1 km) of the radar and conditions probabilistic quantitative precipitation estimates (PQPE) on the rain rate and rainfall type. Uncertainty in rainfall amplitude is the primary factor accounted for in the PQPE development. Additional uncertainties due to rainfall structures, locations, and timing must be considered when using quantitative precipitation forecast (QPF) products as forcing to a hydrologic model. A new method will be presented that shows how QPF ensembles are used in a hydrologic modeling context to derive probabilistic flood forecast products. This method considers the forecast rainfall intensity and morphology superimposed on pre-existing hydrologic conditions to identify the basin scales most at risk.
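
    One way to see what a PQPE-style distribution looks like for a single pixel: propagate uncertainty in the reflectivity-to-rainfall relationship Z = aR^b by sampling its parameters for a given rain type. The spreads below are invented for illustration and are not the MRMS error model.

      import numpy as np

      rng = np.random.default_rng(2)
      Z = 10.0**(40.0/10.0)                    # linear reflectivity at 40 dBZ
      a = rng.normal(300.0, 30.0, 10_000)      # convective-type coefficient (assumed spread)
      b = rng.normal(1.4, 0.05, 10_000)        # convective-type exponent (assumed spread)
      R = (Z / a)**(1.0 / b)                   # rain-rate ensemble, mm/h
      print(np.percentile(R, [10, 50, 90]))    # a probabilistic rain-rate estimate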

  20. Evaluation Between Existing and Improved CCF Modeling Using the NRC SPAR Models

    SciTech Connect

    James K. Knudsen

    2010-06-01

    The NRC SPAR models currently employ the alpha factor common cause failure (CCF) methodology and model CCF for a group of redundant components as a single “rolled-up” basic event. These SPAR models will be updated to employ a more computationally intensive and accurate approach by expanding the CCF basic events for all active components to include all terms that appear in the Basic Parameter Model (BPM). A discussion is provided to detail the differences between the rolled-up common cause group (CCG) and expanded BPM adjustment concepts, based on differences in core damage frequency and individual component importance measures. Lastly, a hypothetical condition is evaluated with a SPAR model to show the difference in results between the current adjustment method (rolled-up CCF events) and the newer method employing all of the expanded terms in the BPM. The event evaluation on the SPAR model employing the expanded terms will be solved using the graphical evaluation module (GEM) and the proposed method discussed in Reference 1.
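
    For concreteness, one common form of the alpha-factor expansion (non-staggered testing) gives the probability of a specific set of k components failing from common cause as Q_k = [k / C(m-1, k-1)] (α_k / α_t) Q_t, with α_t = Σ k·α_k. The sketch below contrasts the expanded terms with the single rolled-up event; the alpha factors and Q_t are illustrative values, not SPAR data.

      from math import comb

      m = 3                                  # common cause group size
      alpha = {1: 0.95, 2: 0.03, 3: 0.02}    # alpha factors (illustrative)
      Qt = 1.0e-3                            # total failure probability per component
      a_t = sum(k*a for k, a in alpha.items())
      Q = {k: (k / comb(m - 1, k - 1)) * alpha[k] / a_t * Qt for k in alpha}
      print(Q)   # Q[1]: independent part; Q[2], Q[3]: expanded BPM CCF terms
      # A rolled-up CCF event keeps only the all-fail term (~Q[3]); the expanded
      # BPM also carries each pairwise term Q[2] explicitly in the fault tree.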

  1. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models

    PubMed Central

    Shimayoshi, Takao; Cha, Chae Young; Amano, Akira

    2015-01-01

    Mathematical cell models are effective tools for understanding cellular physiological functions precisely. For detailed analysis of model dynamics, in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate the contributions of the respective model components to the model dynamics in the intact situation. The technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named the instantaneous equilibrium point, which represents the trend of a model variable at a given instant. This article also illustrates applications of the method to comprehensive myocardial cell models, to gain insight into the mechanisms of action potential generation and the calcium transient. The analysis results exhibit quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and fall of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, which leads to a better understanding of cellular dynamics. PMID:26091413

  2. Numerical Modelling of Extended Leak-Off Test with a Pre-Existing Fracture

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Larsen, I.; Bauer, A.

    2016-04-01

    Extended leak-off test (XLOT) is one of the few techniques available for stress measurements in oil and gas wells. Interpretation of the test is often difficult since the results depend on a multitude of factors, including the presence of natural or drilling-induced fractures in the near-well area. Coupled numerical modelling of XLOT has been performed to investigate the pressure behaviour during the flowback phase as well as the effect of a pre-existing fracture on the test results in a low-permeability formation. Essential features of XLOT known from field measurements are captured by the model, including the saw-tooth shape of the pressure vs injected volume curve, and the change of slope in the pressure vs time curve during flowback used by operators as an indicator of the bottomhole pressure reaching the minimum in situ stress. Simulations with a pre-existing fracture running from the borehole wall in the radial direction have revealed that the results of XLOT are quite sensitive to the orientation of the pre-existing fracture. In particular, the fracture initiation pressure and the formation breakdown pressure increase steadily with decreasing angle between the fracture and the minimum in situ stress. Our findings seem to invalidate the use of the fracture initiation pressure and the formation breakdown pressure for stress measurements or rock strength evaluation purposes.

  3. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    ERIC Educational Resources Information Center

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
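
    The key classroom result, apparent absorbance bending below the Beer-Lambert line when the source is not monochromatic, takes only a few lines to model. Here two spectral lines with different molar absorptivities (values invented for illustration) reach the detector together.

      import numpy as np

      eps = np.array([9000.0, 5000.0])   # molar absorptivities, L/(mol*cm) (assumed)
      I0 = np.array([1.0, 1.0])          # equal source intensity in both lines
      b = 1.0                            # path length, cm
      for c in (1e-5, 1e-4, 1e-3):
          T = (I0 * 10.0**(-eps*b*c)).sum() / I0.sum()
          print(c, -np.log10(T), eps.mean()*b*c)   # apparent vs ideal absorbance

    At low concentration the two columns agree; at high concentration the weakly absorbed line dominates the transmitted light and the apparent absorbance falls below the ideal value.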

  4. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients worldwide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry as to what can and cannot be expected from animal models in the preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall, further development is needed to improve and validate animal models for the diverse areas of epilepsy research where suitable fit-for-purpose models are urgently needed in the search for more effective treatments. PMID:27505294

  5. Local Existence of Weak Solutions to Kinetic Models of Granular Media

    NASA Astrophysics Data System (ADS)

    Agueh, Martial

    2016-08-01

    We prove in any dimension d ≥ 1 a local-in-time existence of weak solutions to the Cauchy problem for the kinetic equation of granular media, ∂_t f + v·∇_x f = div_v[f(∇W ∗_v f)], when the initial data are nonnegative, integrable and bounded functions with compact support in velocity, and the interaction potential W is a C²(ℝ^d) radially symmetric convex function. Our proof is constructive and relies on a splitting argument in position and velocity, where the spatially homogeneous equation is interpreted as the gradient flow of a convex interaction energy with respect to the quadratic Wasserstein distance. Our result generalizes the local existence result obtained by Benedetto et al. (RAIRO Modél Math Anal Numér 31(5):615-641, 1997) on the one-dimensional model of this equation for a cubic power-law interaction potential.

  6. Inheritance of pre-existing weakness in continental breakup: 3D numerical modeling

    NASA Astrophysics Data System (ADS)

    Liao, Jie; Gerya, Taras

    2013-04-01

    The process from continental rifting to seafloor spreading is one of the most important plate-tectonic processes on Earth, yet many questions about it remain poorly understood: how does continental rifting transform into seafloor spreading? How does a curved oceanic ridge develop from a single straight continental rift? How do pre-existing weaknesses in the crust or the lithospheric mantle individually influence continental rifting and oceanic spreading? Employing a state-of-the-art three-dimensional coupled thermomechanical numerical code (based on an Eulerian-Lagrangian finite-difference method with the marker-in-cell technique; Gerya and Yuen, 2007), which can model long-term plate extension and large strains, we study the whole process from continental rifting to seafloor spreading, focusing on the question of how a pre-existing lithospheric weak zone influences continental breakup. Continental rifts do not occur randomly, but tend to follow pre-existing weaknesses (such as fault zones, suture zones, failed rifts, and other tectonic boundaries) in the lithosphere. For instance, the western branch of the East African Rift formed in the relatively weak mobile belts along the curved western border of the Tanzanian craton (Corti et al., 2007; Nyblade and Brazier, 2002), the Main Ethiopian Rift developed within a Proterozoic mobile belt believed to represent a continental collision zone (Keranen and Klemperer, 2008), and the Baikal rift formed along the suture between the Siberian craton and the Sayan-Baikal folded belt (Chemenda et al., 2002). A rift formed at an early stage can be a template for future rift development and continental breakup (Keranen and Klemperer, 2008). Lithospheric weakness can reduce either the crustal or the mantle strength and lead to crustal or mantle necking (Dunbar and Sawyer, 1988), which plays an important role in controlling continental breakup patterns, such as controlling the

  7. From sample to signal in laser-induced breakdown spectroscopy: An experimental assessment of existing algorithms and theoretical modeling approaches

    NASA Astrophysics Data System (ADS)

    Herrera, Kathleen Kate

    In recent years, laser-induced breakdown spectroscopy (LIBS) has become an increasingly popular technique for many diverse applications. This is mainly due to its numerous attractive features, including minimal to no sample preparation, minimal sample invasiveness, sample versatility, remote detection capability and simultaneous multi-elemental capability. However, most LIBS applications are limited to semi-quantitative or relative analysis due to the difficulty of finding matrix-matched standards or a constant reference component in the system for calibration purposes. Therefore, methods which do not require the use of reference standards (hence, standard-free) are highly desired. In this research, a general LIBS system was constructed, calibrated and optimized. The corresponding instrumental function and relative spectral efficiency of the detection system were also investigated. In addition, development of a spectral acquisition method was necessary so that data in the wide spectral range from 220 to 700 nm could be obtained using a non-echelle detection system. This requires multiple acquisitions of successive spectral windows and splicing the windows together with optimum overlap, using an in-house program written in QBasic. Two existing standard-free approaches, the calibration-free LIBS (CF-LIBS) technique and the Monte Carlo simulated annealing optimization modeling algorithm for LIBS (MC-LIBS), were experimentally evaluated in this research. The CF-LIBS approach, which is based on the Boltzmann plot method, is used to directly evaluate the plasma temperature, electron number density and relative concentrations of species present in a given sample without the need for reference standards. In the second approach, the initial value problem is solved based on the model of a radiative plasma expanding into vacuum. Here, the prediction of the initial plasma conditions (i.e., temperature and elemental number densities) is achieved by a step-wise Monte Carlo

  8. Development of an Experimental Model of Diabetes Co-Existing with Metabolic Syndrome in Rats

    PubMed Central

    Suman, Rajesh Kumar; Ray Mohanty, Ipseeta; Borde, Manjusha K.; Maheshwari, Ujwala; Deshmukh, Y. A.

    2016-01-01

    Background. The incidence of metabolic syndrome co-existing with diabetes mellitus is on the rise globally. Objective. The present study was designed to develop a unique animal model that mimics the pathological features seen in individuals with diabetes and metabolic syndrome, suitable for pharmacological screening of drugs. Materials and Methods. A combination of High-Fat Diet (HFD) and a low dose of streptozotocin (STZ) at 30, 35, and 40 mg/kg was used to induce metabolic syndrome in the setting of diabetes mellitus in Wistar rats. Results. STZ at 40 mg/kg produced sustained hyperglycemia, and this dose was therefore selected to induce diabetes mellitus. The various components of metabolic syndrome, namely dyslipidemia (increased triglycerides, total cholesterol and LDL cholesterol, and decreased HDL cholesterol), diabetes mellitus (blood glucose, HbA1c, serum insulin, and C-peptide), and hypertension (systolic blood pressure), were mimicked in the developed model of metabolic syndrome co-existing with diabetes mellitus. In addition to significant cardiac injury, an increased atherogenic index, inflammation (hs-CRP), and a decline in hepatic and renal function were observed in the HF-DC group when compared to the NC group rats. The histopathological assessment confirmed the presence of edema, necrosis, and inflammation in the heart, pancreas, liver, and kidney of the HF-DC group as compared to NC. Conclusion. The present study has developed a unique rodent model of metabolic syndrome, with diabetes as an essential component. PMID:26880906

  9. Quantitative Models of the Dose-Response and Time Course of Inhalational Anthrax in Humans

    PubMed Central

    Schell, Wiley A.; Bulmahn, Kenneth; Walton, Thomas E.; Woods, Christopher W.; Coghill, Catherine; Gallegos, Frank; Samore, Matthew H.; Adler, Frederick R.

    2013-01-01

    Anthrax poses a community health risk due to accidental or intentional aerosol release. Reliable quantitative dose-response analyses are required to estimate the magnitude and timeline of potential consequences and the effect of public health intervention strategies under specific scenarios. Analyses of available data from exposures and infections of humans and non-human primates are often contradictory. We review existing quantitative inhalational anthrax dose-response models in light of criteria we propose for a model to be useful and defensible. To satisfy these criteria, we extend an existing mechanistic competing-risks model to create a novel Exposure–Infection–Symptomatic illness–Death (EISD) model and use experimental non-human primate data and human epidemiological data to optimize parameter values. The best fit to these data leads to estimates of a dose leading to infection in 50% of susceptible humans (ID50) of 11,000 spores (95% confidence interval 7,200–17,000), ID10 of 1,700 (1,100–2,600), and ID1 of 160 (100–250). These estimates suggest that use of a threshold for human infection of 600 spores (as suggested in the literature) underestimates the infectivity of low doses, while an existing estimate of a 1% infection rate for a single spore overestimates low-dose infectivity. We estimate the median time from exposure to onset of symptoms (incubation period) among untreated cases to be 9.9 days (7.7–13.1) for exposure to ID50, 11.8 days (9.5–15.0) for ID10, and 12.1 days (9.9–15.3) for ID1. Our model is the first to provide incubation period estimates that are independently consistent with data from the largest known human outbreak. This model refines previous estimates of the distribution of early onset cases after a release and provides support for the recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses. PMID:24058320
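
    The EISD model itself is mechanistic, but its headline dose estimates are numerically consistent with a single-hit exponential dose-response. The check below (not the authors' model) calibrates the per-spore parameter from the quoted ID50 and recovers ID10 and ID1 close to the quoted 1,700 and 160.

      import numpy as np

      # single-hit exponential dose-response: P(d) = 1 - exp(-p*d)
      ID50 = 11_000                        # spores, the paper's central estimate
      p = np.log(2.0) / ID50               # implied per-spore infection probability
      for q in (0.01, 0.10, 0.50):
          d = np.log(1.0 / (1.0 - q)) / p  # dose infecting a fraction q of people
          print(f"ID{int(100*q)} ~ {d:,.0f} spores")   # ~159, ~1,672, 11,000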

  10. Existence and qualitative properties of travelling waves for an epidemiological model with mutations

    NASA Astrophysics Data System (ADS)

    Griette, Quentin; Raoul, Gaël

    2016-05-01

    In this article, we are interested in a non-monotonic system of logistic reaction-diffusion equations. This system of equations models an epidemic where two types of pathogens are competing, and a mutation can change one type into the other with a certain rate. We show the existence of travelling waves with minimal speed, which are usually non-monotonic. Then we provide a description of the shape of those constructed travelling waves, and relate them to some Fisher-KPP fronts with non-minimal speed.

  11. Existence and uniqueness of stabilized propagating wave segments in wave front interaction model

    NASA Astrophysics Data System (ADS)

    Guo, Jong-Shenq; Ninomiya, Hirokazu; Tsai, Je-Chiang

    2010-02-01

    Recent experimental studies of the photosensitive Belousov-Zhabotinskii reaction have revealed the existence of propagating wave segments. The propagating wave segments are unstable, but can be stabilized by using a feedback control to continually adjust the excitability of the medium. Experimental studies also indicate that the locus of the size of a stabilized wave segment, as a function of the excitability of the medium, gives the excitability boundary for the existence of 2D wave patterns with free ends in excitable media. To study the properties of this boundary curve, we use the wave front interaction model proposed by Zykov and Showalter. This is equivalent to studying a first-order system of three ordinary differential equations which includes a singular nonlinearity. Using two different reduced first-order systems of two ordinary differential equations, we first show the existence of wave segments for any given propagating velocity. The wave profiles can then be classified into two types, namely convex and non-convex. More precisely, when the normalized propagating velocity is small, we show that the wave profile is of convex type, while the wave profile is of non-convex type when the normalized velocity is close to 1.

  12. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation

  13. Existence and exponential stability of positive almost periodic solution for Nicholson's blowflies models on time scales.

    PubMed

    Li, Yongkun; Li, Bing

    2016-01-01

    In this paper, we first give a new definition of almost periodic time scales, two new definitions of almost periodic functions on time scales and investigate some basic properties of them. Then, as an application, by using a fixed point theorem in Banach space and the time scale calculus theory, we obtain some sufficient conditions for the existence and exponential stability of positive almost periodic solutions for a class of Nicholson's blowflies models on time scales. Finally, we present an illustrative example to show the effectiveness of obtained results. Our results show that under a simple condition the continuous-time Nicholson's blowflies model and its discrete-time analogue have the same dynamical behaviors. PMID:27468397
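
    For readers unfamiliar with the model, the continuous-time Nicholson's blowflies equation is the delay differential equation N'(t) = -δN(t) + pN(t-τ)e^{-aN(t-τ)}. A fixed-step Euler scheme (parameters illustrative, not from the paper) is enough to explore its dynamics, including the oscillatory regimes whose almost periodic analogues the paper treats on general time scales.

      import numpy as np

      p, delta, a, tau = 8.0, 1.0, 1.0, 2.0   # illustrative parameters (assumed)
      dt = 1.0e-3
      lag = int(tau / dt)
      n_steps = 40_000
      N = np.empty(n_steps + lag + 1)
      N[:lag + 1] = 3.0                       # constant history on [-tau, 0]
      for i in range(lag, n_steps + lag):
          Nd = N[i - lag]                     # delayed state N(t - tau)
          N[i + 1] = N[i] + dt*(-delta*N[i] + p*Nd*np.exp(-a*Nd))
      print(N[-5:])                           # late-time values of the population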

  14. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    PubMed

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use the potential effects on aquatic organisms are a cause of concern. Non-testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH has been highlighted. PMID:23702385

  15. Existence of the critical endpoint in the vector meson extended linear sigma model

    NASA Astrophysics Data System (ADS)

    Kovács, P.; Szép, Zs.; Wolf, Gy.

    2016-06-01

    The chiral phase transition of strongly interacting matter is investigated at nonzero temperature and baryon chemical potential (μB) within an extended (2+1)-flavor Polyakov constituent quark-meson model that incorporates the effects of the vector and axial vector mesons. The effect of the fermionic vacuum and thermal fluctuations computed from the grand potential of the model is taken into account in the curvature masses of the scalar and pseudoscalar mesons. The parameters of the model are determined by comparing masses and tree-level decay widths with experimental values in a χ²-minimization procedure that selects between various possible assignments of scalar nonet states to physical particles. We examine the restoration of the chiral symmetry by monitoring the temperature evolution of the condensates, the chiral partners' masses, and the mixing angles for the pseudoscalar η–η′ and the corresponding scalar complex. We calculate the pressure and various thermodynamical observables derived from it and compare them to the continuum-extrapolated lattice results of the Wuppertal-Budapest collaboration. We study the T–μB phase diagram of the model and find that a critical endpoint exists for parameters of the model which give acceptable values of χ².

  16. Model based prediction of the existence of the spontaneous cochlear microphonic

    NASA Astrophysics Data System (ADS)

    Ayat, Mohammad; Teal, Paul D.

    2015-12-01

    In the mammalian cochlea, self-sustaining oscillation of the basilar membrane can cause vibration of the ear drum and produce spontaneous narrow-band air pressure fluctuations in the ear canal. These spontaneous fluctuations are known as spontaneous otoacoustic emissions. Small perturbations in the feedback gain of the cochlear amplifier have been proposed as the generation source of these self-sustaining oscillations of the basilar membrane. We hypothesise that the self-sustaining oscillations resulting from small perturbations in feedback gain produce spontaneous potentials in the cochlea. We demonstrate that, according to the results of the model, a measurable spontaneous cochlear microphonic must exist in the human cochlea. The existence of this signal has not yet been reported; however, this spontaneous electrical signal could play an important role in auditory research. Successful or unsuccessful recording of this signal will indicate whether previous hypotheses about the generation source of spontaneous otoacoustic emissions are valid or should be amended. In addition, according to the proposed model, the spontaneous cochlear microphonic is essentially an electrical analogue of spontaneous otoacoustic emissions. In certain experiments, the spontaneous cochlear microphonic may be more easily detected near its generation site, with proper electrical instrumentation, than the spontaneous otoacoustic emission.

  17. What Are We Doing When We Translate from Quantitative Models?

    ERIC Educational Resources Information Center

    Critchfield, Thomas S.; Reed, Derek D.

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may…

  18. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models.

    PubMed

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-08-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee - varroa mite - virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431

  19. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models

    PubMed Central

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-01-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee – varroa mite – virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431

  20. Endoscopic skull base training using 3D printed models with pre-existing pathology.

    PubMed

    Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes

    2015-03-01

    Endoscopic base of skull surgery has been growing in acceptance in recent years due to improvements in visualisation and micro-instrumentation, as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides a very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platybasia and basilar invagination were created from the original MRI and CT imaging data of that patient. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to rate the difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially, as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practise complex procedures in a controlled fashion under the supervision of experts. PMID:25294050

  1. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  2. Existence and stability of limit cycles in a macroscopic neuronal population model

    NASA Astrophysics Data System (ADS)

    Rodrigues, Serafim; Gonçalves, Jorge; Terry, John R.

    2007-09-01

    We present rigorous results concerning the existence and stability of limit cycles in a macroscopic model of neuronal activity. The specific model we consider is developed from the K-set methodology popularized by Walter Freeman; in particular, we focus on a specific reduction of the KII sets, denoted RKII sets. We analyse the unfolding of supercritical Hopf bifurcations via consideration of the normal forms and centre manifold reductions. Subsequently we analyse the global stability of limit cycles on a region of parameter space, which is achieved by applying a new methodology termed Global Analysis of Piecewise Linear Systems. The analysis presented may also be used to consider coupled systems of this type. A number of macroscopic mean-field approaches to modelling human EEG may be considered as coupled RKII networks. Hence, developing a theoretical understanding of the onset of oscillations in models of this type has important implications in clinical neuroscience, as limit cycle oscillations have been demonstrated to be critical in the onset of certain types of epilepsy.

  3. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.

  4. Using Existing Coastal Models To Address Ocean Acidification Modeling Needs: An Inside Look at Several East and Gulf Coast Regions

    NASA Astrophysics Data System (ADS)

    Jewett, E.

    2013-12-01

    Ecosystem forecast models have been in development for many US coastal regions for decades in an effort to understand how certain drivers, such as nutrients, freshwater and sediments, affect coastal water quality. These models have been used to inform coastal management interventions such as imposition of total maximum daily load allowances for nutrients or sediments to control hypoxia, harmful algal blooms and/or water clarity. Given the overlap of coastal acidification with hypoxia, it seems plausible that the geochemical models built to explain hypoxia and/or HABs might also be used, with additional terms, to understand how atmospheric CO2 is interacting with local biogeochemical processes to affect coastal waters. Examples of existing biogeochemical models from Galveston, the northern Gulf of Mexico, Tampa Bay, West Florida Shelf, Pamlico Sound, Chesapeake Bay, and Narragansett Bay will be presented and explored for suitability for ocean acidification modeling purposes.

  5. Existence, numerical convergence and evolutionary relaxation for a rate-independent phase-transformation model.

    PubMed

    Heinz, Sebastian; Mielke, Alexander

    2016-04-28

    We revisit the model for a two-well phase transformation in a linearly elastic body that was introduced and studied in Mielke et al. (2002 Arch. Ration. Mech. Anal. 162, 137-177). This energetic rate-independent system is posed in terms of the elastic displacement and an internal variable that gives the phase portion of the second phase. We use a new approach based on mutual recovery sequences, which are adjusted to a suitable energy increment plus the associated dissipated energy and, thus, enable us to pass to the limit in the construction of energetic solutions. We give three distinct constructions of mutual recovery sequences which allow us (i) to generalize the existence result in Mielke et al. (2002), (ii) to establish the convergence of suitable numerical approximations via space-time discretization and (iii) to perform the evolutionary relaxation from the pure-state model to the relaxed-mixture model. All these results rely on weak convergence and involve the H-measure as an essential tool. PMID:27002066

  6. Existence of solutions to the Stommel-Charney model of the Gulf Stream

    SciTech Connect

    Barcilon, V.; Constantin, P.; Titi, E.S.

    1988-11-01

    This paper discusses the existence of weak solutions to the equations as a model of the Gulf Stream. The method of artificial viscosity is also discussed. Key words: Navier-Stokes equation, artificial viscosity, ocean circulation, DOE. The authors examine the mathematical properties of an equation arising in the theory of ocean circulation. In order to understand the role of this problem in oceanography, a brief review of the subject is given. The first successful attempt to provide a mathematical description of the mid-latitude ocean currents was made by other investigators. It was shown conclusively that a Gulf Stream-like intensification on the western side of an ocean basin could be explained by the so-called β-effect. This is the geophysical terminology for the latitudinal variation of the normal component of the earth's rotation. Aside from this variable Coriolis force, the other forces which entered into Stommel's model were those due to the pressure gradient, the surface winds, and friction. For the sake of simplicity, this last force was taken to be proportional to the velocity fields. All the effects of density stratification were neglected by making the assumption that the ocean was homogeneous. Finally, by working with vertical averages, Stommel essentially treated the ocean circulation as a two-dimensional horizontal motion. Somewhat surprisingly, Stommel's ad hoc, linear model was shown later to provide an accurate description of an actual experimental setup.

  7. Frequency domain modeling and dynamic characteristics evaluation of existing wind turbine systems

    NASA Astrophysics Data System (ADS)

    Chiang, Chih-Hung; Yu, Chih-Peng

    2016-04-01

    It is quite well accepted that frequency domain procedures are suitable for the design and dynamic analysis of wind turbine structures, especially for floating offshore wind turbines, since random wind loads and wave-induced motions are most readily simulated in the frequency domain. This paper presents specific applications of an effective frequency domain scheme to the linear analysis of wind turbine structures, in which a 1-D spectral element was developed based on the axially loaded member. The solution schemes are summarized for the spectral analyses of the tower, the blades, and the combined system with selected frequency-dependent coupling effects from foundation-structure interaction. Numerical examples demonstrate that the modal frequencies obtained using spectral-element models are in good agreement with those found in the literature. A 5-element mono-pile model results in less than 0.3% deviation from an existing 160-element model. It is preliminarily concluded that the proposed scheme is relatively efficient for quick verification of test data obtained from on-site vibration measurements using a microwave interferometer.

  8. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model.

    PubMed

    Ma, Rongfei

    2015-01-01

    In this paper, a quantitative analysis of ammonia based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain the current-voltage (I-V) data, and the measurement data were processed with the non-linear bistable dynamic model. Results showed that the proposed method can quantitatively determine ammonia concentration. PMID:25975362

  9. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    through the processor, individually contributing to the nearest grid posts in a memory mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that does not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process, and gaining new insight into the precision of the current model, which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi-campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering level data for the current data sets. This is essential if future generations of DEM users should be able to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
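    The anomaly-screening update described above is easy to sketch. The snippet below is a minimal illustration with made-up names and an illustrative threshold, assuming the current model and the new points share a coordinate system; it is not the authors' production code.

```python
import numpy as np

def update_dem(dem, cell, x, y, z, threshold=0.1):
    """dem: current height grid; cell: grid spacing [m];
    x, y, z: new point cloud; threshold: anomaly [m] treated as insignificant."""
    # Height anomaly of each new point against its nearest grid post
    i = np.clip(np.rint(y / cell).astype(int), 0, dem.shape[0] - 1)
    j = np.clip(np.rint(x / cell).astype(int), 0, dem.shape[1] - 1)
    anomaly = z - dem[i, j]

    # Ignore points that do not tell us anything new
    keep = np.abs(anomaly) > threshold

    # Mean anomaly per grid post forms the correction grid
    corr = np.zeros_like(dem)
    hits = np.zeros_like(dem)
    np.add.at(corr, (i[keep], j[keep]), anomaly[keep])
    np.add.at(hits, (i[keep], j[keep]), 1.0)
    corr = np.divide(corr, hits, out=np.zeros_like(corr), where=hits > 0)
    return dem + corr, keep.mean()  # updated model, fraction of points retained
```

    Grid posts with no significant anomalies keep their current height, which is exactly the "only compute updates in areas of significant change" behaviour described above.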

  10. Dynamics of childhood growth and obesity: development and validation of a quantitative mathematical model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...

  11. Future observational and modelling needs identified on the basis of the existing shelf data

    NASA Astrophysics Data System (ADS)

    Berlamont, J.; Radach, G.; Becker, G.; Colijn, F.; Gekeler, J.; Laane, R. W. P. M.; Monbaliu, J.; Prandle, D.; Sündermann, J.; van Raaphorst, W.; Yu, C. S.

    1996-09-01

    NOWESP has compiled a vast quantity of existing data from the north-west European shelf. Such a focused task is without precedent. It is now highly recommended that one, or a few, national and international data centres or agencies should be chosen and properly supported by the E.U., where all available observational data, including the NOWESP data, are collected, stored, regularly updated by the providers of the data, and made available to researchers. International agreement must be reached on the quality-control procedures and quality standards for data to be stored in these databases. Proper arrangements should be made to preserve the economic value of the data for their “owners” without compromising use of the data by researchers or duplicating data-collecting efforts. The Continental Shelf data needed are concentration fields of temperature, salinity, nutrients, suspended matter and chlorophyll, which can be called “climatological” fields. For this purpose at least one monthly survey of the whole European shelf is needed for at least five years, with a proper spatial resolution, e.g. 1° by 1°, and at least in those areas where climatological data are now totally lacking. From the modelling point of view, an alternative would be the availability of data from sufficiently representative fixed stations on the shelf, with weekly sampling for several years. It should be realized that there are hardly any data available on the shelf boundaries. Therefore, one should consider a European effort to set up a limited network of stations, especially at the shelf edge, where a limited, selected set of parameters is measured on a long-term basis (time series) for use in modelling and for interpreting long-term natural changes in the marine environment and changes due to human interference (eutrophication, pollutants, climatic changes, biodiversity changes). The E.U. could foster coordination of nationally organized measuring campaigns in Europe

  12. Physically based estimation of soil water retention from textural data: General framework, new models, and streamlined existing models

    USGS Publications Warehouse

    Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.

    2007-01-01

    Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculations and other factors. © Soil Science Society of America. All rights reserved.
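    The capillary inverse proportionality invoked above is commonly written via the Young-Laplace relation; in the form below, γ is the surface tension and θ the contact angle (symbols not defined in the abstract itself):

```latex
\psi = \frac{2\,\gamma\,\cos\theta}{r}
```

    so the matric pressure associated with a pore scales inversely with its effective radius, which is what lets a characteristic particle dimension R map uniquely onto a ψ value.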

  13. Impact assessment of abiotic resources in LCA: quantitative comparison of selected characterization models.

    PubMed

    Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F

    2014-10-01

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA. PMID:25208267
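    The pairwise log-linear comparison of characterization factors described above amounts to correlating the CFs of shared resources in log-log space. A minimal sketch, with an illustrative data layout rather than the study's actual one:

```python
import numpy as np
from itertools import combinations

def pairwise_log_correlations(cf):
    """cf: dict mapping method name -> {resource: characterization factor}."""
    results = {}
    for m1, m2 in combinations(cf, 2):
        shared = [r for r in cf[m1]
                  if r in cf[m2] and cf[m1][r] > 0 and cf[m2][r] > 0]
        if len(shared) < 3:
            continue  # too few shared resources for a meaningful comparison
        x = np.log10([cf[m1][r] for r in shared])
        y = np.log10([cf[m2][r] for r in shared])
        results[(m1, m2)] = (np.corrcoef(x, y)[0, 1], len(shared))
    return results
```

    Repeating the same computation over the total impact scores of the inventory data sets, instead of over CFs, gives the second correlation matrix that the study contrasts with the first.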

  14. Photon-tissue interaction model for quantitative assessment of biological tissues

    NASA Astrophysics Data System (ADS)

    Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann

    2014-02-01

    In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.

  15. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of thiM operon and transcription and translation of thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506

  16. A QUANTITATIVE PEDOLOGY APPROACH TO CONTINUOUS SOIL LANDSCAPE MODELS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Continuous representations of soil profiles and landscapes are needed to provide input into process based models and to move beyond the categorical paradigm of horizons and map-units. Continuous models of soil landscapes should be driven by the factors and processes of the soil genetic model. Parame...

  17. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties, such as security, safety, survivability, fault tolerance, and real-time performance.

  18. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  19. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  20. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • SDMProjectBuilder (which includes the Microbial Source Module as part...

  1. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    PubMed

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. PMID:22197801

  2. Quantitative assessment of meteorological and tropospheric Zenith Hydrostatic Delay models

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Guo, Jiming; Chen, Ming; Shi, Junbo; Zhou, Lv

    2016-09-01

    Tropospheric delay has always been an important issue in GNSS/DORIS/VLBI/InSAR processing. The most commonly used empirical models for the determination of tropospheric Zenith Hydrostatic Delay (ZHD), comprising three meteorological models and two empirical ZHD models, are carefully analyzed in this paper. The meteorological models are UNB3m, GPT2 and GPT2w, while the ZHD models are Hopfield and Saastamoinen. By reference to in-situ meteorological measurements and ray-traced ZHD values of 91 globally distributed radiosonde sites, over a four-year period from 2010 to 2013, it is found that there is strong correlation between the errors of model-derived values and latitude. Specifically, the Saastamoinen model shows a systematic error of about -3 mm. Therefore a modified Saastamoinen model is developed based on the "best average" refractivity constant, and is validated by radiosonde data. Among the different models, the GPT2w and the modified Saastamoinen model perform best. ZHD values derived from their combination have a mean bias of -0.1 mm and a mean RMS of 13.9 mm. Limitations of the present models are discussed and suggestions for further improvements are given.
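    For context, the Saastamoinen ZHD model referred to above is commonly written in the following form (P₀ the surface pressure in hPa, φ the latitude, h the station height in metres, ZHD in metres); the modified model proposed in the paper would replace the leading constant with one derived from its "best average" refractivity constant:

```latex
\mathrm{ZHD} = \frac{0.0022768\, P_{0}}{1 - 0.00266\cos(2\varphi) - 0.28\times 10^{-6}\, h}
```

    The roughly -3 mm systematic error reported above is consistent with a small bias in that leading constant, which is why re-deriving it from a better refractivity value removes the offset.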

  3. Thermodynamic Modeling of a Solid Oxide Fuel Cell to Couple with an Existing Gas Turbine Engine Model

    NASA Technical Reports Server (NTRS)

    Brinson, Thomas E.; Kopasakis, George

    2004-01-01

    The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in operating a solid oxide fuel cell (SOFC) in conjunction with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate a SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For the project's application, the main issue with this model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create a SOFC model based on the DOE model that takes the fuel cell's flow rates as inputs and outputs the temperatures of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two steps. First, for a given flow rate the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and temperature curve will be generated, and the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve because voltage is a function of temperature. Once the temperature profile is created and validated, the model will be placed into the turbine engine simulation for system analysis.
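    The first-law balance described above can be sketched directly: treat the fuel cell as a steady-flow device and solve the energy balance for the exit temperature instead of supplying it as an input. All names and numbers below are illustrative assumptions, not the DOE model's actual variables.

```python
def exit_temperature(m_dot, cp, T_in, Q_reaction, P_elec):
    """Steady-flow first law:  m_dot * cp * (T_out - T_in) = Q_reaction - P_elec.
    m_dot: gas mass flow [kg/s]; cp: mean specific heat [J/(kg*K)];
    T_in: inlet temperature [K]; Q_reaction: heat released by the cell
    reactions [W]; P_elec: electrical power drawn from the stack [W]."""
    return T_in + (Q_reaction - P_elec) / (m_dot * cp)

# Example: 0.05 kg/s of hot gas entering at 1000 K
print(exit_temperature(0.05, 1100.0, 1000.0, Q_reaction=30e3, P_elec=20e3))
```

    Sweeping the fuel flow rate (which sets the reaction heat) then traces out the temperature-versus-fuel-flow profile the project needs, and matching I-V curves against the DOE model validates it, since voltage is a function of temperature.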

  4. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models-conditioned on pose-for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level. PMID:15376934
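    The reporting format described in the last sentence is easy to reproduce. The sketch below assumes a fully specified Gaussian null and synthetic data, purely to illustrate the "percentage of samples failing at a given significance level" summary; it is not the authors' test suite.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# A large number of small samples, e.g. one per (object, pose, pixel) cell
samples = [rng.normal(0.0, 1.0, size=8) for _ in range(5000)]

pvals = np.array([stats.kstest(s, "norm", args=(0.0, 1.0)).pvalue
                  for s in samples])

for alpha in (0.01, 0.05, 0.10):
    print(f"alpha={alpha:.2f}: {np.mean(pvals < alpha):.1%} of samples fail")
```

    When the model is correct, the failure fraction should track the significance level itself; a systematic excess of failures at all levels indicates a violated assumption such as heteroscedasticity or a wrong mean.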

  5. On the Non-Existence of Optimal Solutions and the Occurrence of "Degeneracy" in the CANDECOMP/PARAFAC Model

    ERIC Educational Resources Information Center

    Krijnen, Wim P.; Dijkstra, Theo K.; Stegeman, Alwin

    2008-01-01

    The CANDECOMP/PARAFAC (CP) model decomposes a three-way array into a prespecified number of "R" factors and a residual array by minimizing the sum of squares of the latter. It is well known that an optimal solution for CP need not exist. We show that if an optimal CP solution does not exist, then any sequence of CP factors monotonically decreasing…

  6. Global existence of the three-dimensional viscous quantum magnetohydrodynamic model

    SciTech Connect

    Yang, Jianwei; Ju, Qiangchang

    2014-08-15

    The global-in-time existence of weak solutions to the viscous quantum magnetohydrodynamic equations in a three-dimensional torus with large data is proved, using the Faedo-Galerkin method and weak compactness techniques.

  7. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
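    Dempster's rule of combination, mentioned above, has a compact textbook implementation. The sketch below combines two basic probability assignments over frozenset focal elements; the failure hypotheses are invented for illustration and are not from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dict: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two diagnostic models weighing in on failure hypotheses f1 and f2
m1 = {frozenset({"f1"}): 0.6, frozenset({"f1", "f2"}): 0.4}
m2 = {frozenset({"f2"}): 0.5, frozenset({"f1", "f2"}): 0.5}
print(dempster_combine(m1, m2))
```

    The normalization by 1 - conflict is what lets potentially incomplete or conflicting inferences from parallel diagnostic models be fused into a single belief assignment.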

  8. A Quantitative Causal Model Theory of Conditional Reasoning

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  9. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. PMID:25951756

  10. Quantitative Methods for Comparing Different Polyline Stream Network Models

    SciTech Connect

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring relative horizontal accuracy of complex linear spatial features are described and sample source code (pseudo code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving extracting of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that the direct delineation from LiDAR point clouds yielded an excellent and much better match, as indicated by the LRMSE.
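    Of the two measures, relative sinuosity is the simpler to reproduce. The sketch below computes it for single polylines under the usual definition (path length over end-to-end distance); summing per-reach values would extend it to whole networks, and the LRMSE computation is omitted for brevity. The data are made up.

```python
import numpy as np

def sinuosity(xy):
    """Path length divided by end-to-end distance for an Nx2 polyline."""
    seg = np.diff(xy, axis=0)
    path = np.hypot(seg[:, 0], seg[:, 1]).sum()
    chord = np.hypot(*(xy[-1] - xy[0]))
    return path / chord

def relative_sinuosity(derived, reference):
    """>1: derived line is more convoluted than the reference; <1: smoother."""
    return sinuosity(derived) / sinuosity(reference)

ref = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, -0.1], [3.0, 0.0]])
der = np.array([[0.0, 0.0], [0.5, 0.4], [1.5, -0.5], [3.0, 0.0]])
print(f"relative sinuosity: {relative_sinuosity(der, ref):.2f}")
```

    Values near 1 indicate that the derived network reproduces the level of detail of the digitized reference, which is how the LiDAR cell-size comparisons above were scored.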

  11. Detection of cardiomyopathy in an animal model using quantitative autoradiography

    SciTech Connect

    Kubota, K.; Som, P.; Oster, Z.H.; Brill, A.B.; Goodman, M.M.; Knapp, F.F. Jr.; Atkins, H.L.; Sole, M.J.

    1988-10-01

    A fatty acid analog, 15-(p-iodophenyl)-3,3-dimethylpentadecanoic acid (DMIPP), was studied in cardiomyopathic (CM) and normal age-matched Syrian hamsters. Dual tracer quantitative whole-body autoradiography (QARG) with DMIPP and 2-(¹⁴C(U))-2-deoxy-2-fluoro-D-glucose (FDG) or with FDG and ²⁰¹Tl enabled comparison of the uptake of a fatty acid and a glucose analog with the blood flow. These comparisons were carried out at the onset and mid-stage of the disease, before congestive failure developed. Groups of CM and normal animals were treated with verapamil from the age of 26 days, before the onset of the disease, for 41 days. In CM hearts, areas of decreased DMIPP uptake were seen. These areas were much larger than the areas of decreased uptake of FDG or ²⁰¹Tl. In early CM only minimal changes in FDG or ²⁰¹Tl uptake were observed as compared to controls. Treatment of CM-prone animals with verapamil prevented any changes in DMIPP, FDG, or ²⁰¹Tl uptake. DMIPP seems to be a more sensitive indicator of early cardiomyopathic changes than ²⁰¹Tl or FDG. A trial of DMIPP and SPECT in the diagnosis of human disease, as well as for monitoring the effects of drugs which may prevent it, seems to be warranted.

  12. Toward a class-independent quantitative structure--activity relationship model for uncouplers of oxidative phosphorylation.

    PubMed

    Spycher, Simon; Smejtek, Pavel; Netzeva, Tatiana I; Escher, Beate I

    2008-04-01

    A mechanistically based quantitative structure-activity relationship (QSAR) for the uncoupling activity of weak organic acids has been derived. The analysis of earlier experimental studies suggested that the limiting step in the uncoupling process is the rate with which anions can cross the membrane and that this rate is determined by the height of the energy barrier encountered in the hydrophobic membrane core. We use this mechanistic understanding to develop a predictive model for uncoupling. The translocation rate constants of anions correlate well with the free energy difference between the energy well and the energy barrier, ΔG_well-barrier,A⁻, in the membrane, calculated by a novel approach to describe internal partitioning in the membrane. An existing data set of 21 phenols measured in an in vitro test system specific for uncouplers was extended by 14 highly diverse compounds. A simple regression model based on the experimental membrane-water partition coefficient and ΔG_well-barrier,A⁻ showed good predictive power and had meaningful regression coefficients. To establish uncoupler QSARs independent of chemical class, it is necessary to calculate the descriptors for the charged species, as the analogous descriptors of the neutral species showed almost no correlation with the translocation rate constants of anions. The substitution of experimental with calculated partition coefficients resulted in a decrease of the model fit. A particular strength of the current model is the accurate calculation of excess toxicity, which makes it a suitable tool for database screening. The applicability domain, limitations of the model, and ideas for future research are critically discussed. PMID:18358007

  13. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
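    To make the Boolean approach concrete, the sketch below steps a three-component negative-feedback ring with a light input under synchronous update. It is an invented minimal circuit in the spirit of the logic models described above, not one of the paper's fitted clock models.

```python
def step(state, light):
    a, b, c = state
    return (
        (not c) or light,  # gene A: repressed by C, induced by light
        a,                 # gene B follows A with one step of delay
        b,                 # gene C follows B, closing the negative loop
    )

state = (True, False, False)
for t in range(12):
    light = (t % 6) < 3  # crude 3-steps-light / 3-steps-dark cycle
    print(t, "".join("ABC"[k] if state[k] else "-" for k in range(3)))
    state = step(state, light)
```

    Because each gene is only on or off, the whole state and parameter space is finite; fitting reduces to choosing update rules whose trajectories best match thresholded expression data, which is what makes the optimization tractable.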

  14. A quantitative model of plasma in Neptune's magnetosphere

    NASA Astrophysics Data System (ADS)

    Richardson, J. D.

    1993-07-01

    A model encompassing plasma transport and energy processes is applied to Neptune's magnetosphere. Starting with profiles of the neutral densities and the electron temperature, the model calculates the plasma density and ion temperature profiles. Good agreement between model results and observations is obtained for a neutral source of 5 × 10²⁵ s⁻¹ if the diffusion coefficient is 10⁻⁸ L³ R_N²/s, plasma is lost at a rate 1/3 that of the strong diffusion rate, and plasma subcorotates in the region outside Triton.

  15. Efficient Recycled Algorithms for Quantitative Trait Models on Phylogenies

    PubMed Central

    Hiscott, Gordon; Fox, Colin; Parry, Matthew; Bryant, David

    2016-01-01

    We present an efficient and flexible method for computing likelihoods for phenotypic traits on a phylogeny. The method does not resort to Monte Carlo computation but instead blends Felsenstein’s discrete character pruning algorithm with methods for numerical quadrature. It is not limited to Gaussian models and adapts readily to model uncertainty in the observed trait values. We demonstrate the framework by developing efficient algorithms for likelihood calculation and ancestral state reconstruction under Wright’s threshold model, applying our methods to a data set of trait data for extrafloral nectaries across a phylogeny of 839 Fabales species. PMID:27056412

  16. Efficient Recycled Algorithms for Quantitative Trait Models on Phylogenies.

    PubMed

    Hiscott, Gordon; Fox, Colin; Parry, Matthew; Bryant, David

    2016-01-01

    We present an efficient and flexible method for computing likelihoods for phenotypic traits on a phylogeny. The method does not resort to Monte Carlo computation but instead blends Felsenstein's discrete character pruning algorithm with methods for numerical quadrature. It is not limited to Gaussian models and adapts readily to model uncertainty in the observed trait values. We demonstrate the framework by developing efficient algorithms for likelihood calculation and ancestral state reconstruction under Wright's threshold model, applying our methods to a data set of trait data for extrafloral nectaries across a phylogeny of 839 Fabales species. PMID:27056412

  17. A Quantitative Model of Honey Bee Colony Population Dynamics

    PubMed Central

    Khoury, David S.; Myerscough, Mary R.; Barron, Andrew B.

    2011-01-01

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem. PMID:21533156
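    A compartment model of this type is small enough to sketch. The version below has the hive-bee/forager structure described above, with illustrative (not the paper's) parameter values; recruitment slows as the forager fraction rises, standing in for social inhibition.

```python
from scipy.integrate import solve_ivp

def colony(t, y, L=2000.0, w=27000.0, alpha=0.25, sigma=0.75, m=0.40):
    H, F = y                                  # hive bees, foragers
    N = H + F
    eclosion = L * N / (w + N)                # brood rearing saturates with workforce
    recruit = max(alpha - sigma * F / N, 0.0) if N > 0 else 0.0
    dH = eclosion - recruit * H
    dF = recruit * H - m * F                  # m is the forager death rate
    return [dH, dF]

sol = solve_ivp(colony, (0.0, 300.0), [10000.0, 2000.0])
print(f"colony size after 300 days: {sol.y[:, -1].sum():.0f} bees")
```

    Sweeping m above and below its critical value reproduces the qualitative threshold behaviour described above: stable population regulation at low forager death rates, accelerating decline beyond the threshold.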

  18. Quantitative comparisons of numerical models of brittle deformation

    NASA Astrophysics Data System (ADS)

    Buiter, S.

    2009-04-01

    Numerical modelling of brittle deformation in the uppermost crust can be challenging owing to the requirement of an accurate pressure calculation, the ability to achieve post-yield deformation and localisation, and the choice of rheology (plasticity law). One way to approach these issues is to conduct model comparisons that can evaluate the effects of different implementations of brittle behaviour in crustal deformation models. We present a comparison of three brittle shortening experiments for fourteen different numerical codes, which use finite element, finite difference, boundary element and distinct element techniques. Our aim is to constrain and quantify the variability among models in order to improve our understanding of causes leading to differences between model results. Our first experiment of translation of a stable sand-like wedge serves as a reference that allows for testing against analytical solutions (e.g., taper angle, root-mean-square velocity and gravitational rate of work). The next two experiments investigate an unstable wedge in a sandbox-like setup which deforms by inward translation of a mobile wall. All models accommodate shortening by in-sequence formation of forward shear zones. We analyse the location, dip angle and spacing of thrusts in detail as previous comparisons have shown that these can be highly variable in numerical and analogue models of crustal shortening and extension. We find that an accurate implementation of boundary friction is important for our models. Our results are encouraging in the overall agreement in their dynamic evolution, but show at the same time the effort that is needed to understand shear zone evolution. GeoMod2008 Team: Markus Albertz, Michele Cooke, Susan Ellis, Taras Gerya, Luke Hodkinson, Kristin Hughes, Katrin Huhn, Boris Kaus, Walter Landry, Bertrand Maillot, Christophe Pascal, Anton Popov, Guido Schreurs, Christopher Beaumont, Tony Crook, Mario Del Castello and Yves Leroy

  19. Quantitative comparisons of numerical models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne

    2010-05-01

    Numerical and laboratory models are often used to investigate the evolution of deformation processes at various scales in crust and lithosphere. In both approaches, the freedom in choice of simulation method, materials and their properties, and deformation laws could affect model outcomes. To assess the role of modelling method and to quantify the variability among models, we have performed a comparison of laboratory and numerical experiments. Here, we present results of 11 numerical codes, which use finite element, finite difference and distinct element techniques. We present three experiments that describe shortening of a sand-like, brittle wedge. The material properties of the numerical 'sand', the model set-up and the boundary conditions are strictly prescribed and follow the analogue setup as closely as possible. Our first experiment translates a non-accreting wedge with a stable surface slope of 20 degrees. In agreement with critical wedge theory, all models maintain the same surface slope and do not deform. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge in a sandbox-like setup, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. We show that we successfully simulate sandbox-style brittle behaviour using different numerical modelling techniques and that we obtain the same styles of deformation behaviour in numerical and laboratory experiments at similar levels of variability. The GeoMod2008 Numerical Team: Markus Albertz, Michelle Cooke, Tony Crook, David Egholm, Susan Ellis, Taras Gerya, Luke Hodkinson, Boris Kaus, Walter Landry, Bertrand Maillot, Yury Mishin

  20. Quantitative structure-(chromatographic) retention relationship models for dissociating compounds.

    PubMed

    Kubik, Łukasz; Wiczling, Paweł

    2016-08-01

    The aim of this work was to develop mathematical models relating the hydrophobicity and dissociation constant of an analyte with its structure, which would be useful in predicting analyte retention times in reversed-phase liquid chromatography. For that purpose a large and diverse group of 115 drugs was used to build three QSRR models combining retention-related parameters (log kw, a chromatographic measure of hydrophobicity; S, the slope factor from the Snyder-Soczewiński equation; and pKa) with structural descriptors calculated by means of molecular modeling for both dissociated and nondissociated forms of analytes. Lasso, Stepwise and PLS regressions were used to build the statistical models. Moreover, simple QSRR equations based on lipophilicity and dissociation constant parameters calculated in the ACD/Labs software were proposed and compared with the quantum chemistry-based QSRR equations. The obtained relationships were further used to predict chromatographic retention times. The predictive performance of the obtained models was assessed using 10-fold cross-validation and external validation. The QSRR equations developed were simple and were characterized by satisfactory predictive performance. Application of quantum chemistry-based and ACD-based descriptors leads to similar accuracy of retention time prediction. PMID:26960942
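    As one example of the model-building route described above, the sketch below runs the Lasso branch with 10-fold cross-validation; the descriptor matrix is synthetic stand-in data, not the study's 115-drug set.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(115, 40))   # 115 analytes x 40 molecular descriptors
y = 1.5 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=115)  # mock log kw

model = make_pipeline(StandardScaler(), LassoCV(cv=10))
r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
print(f"10-fold CV R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```

    Fitting one such model for each of log kw, S and pKa, and then feeding the predictions into the retention relationship, yields the retention-time predictions that the external validation evaluates.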

  1. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology

    PubMed Central

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian

    2012-01-01

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼ 10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10⁶, of normal HSCs. Radiobiologic estimates favor values > 10⁶ for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  2. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology.

    PubMed

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian; Sachs, Rainer K

    2012-05-10

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10⁶, of normal HSCs. Radiobiologic estimates favor values > 10⁶ for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  3. Magnetospheric mapping with a quantitative geomagnetic field model

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Mead, G. D.

    1975-01-01

    Mapping the magnetosphere on a dipole geomagnetic field model by projecting field and particle observations onto the model is described. High-latitude field lines are traced between the earth's surface and their intersection with either the equatorial plane or a cross section of the geomagnetic tail, and data from low-altitude orbiting satellites are projected along field lines to the outer magnetosphere. This procedure is analyzed, and the resultant mappings are illustrated. Extension of field lines into the geomagnetic tail and low-altitude determination of the polar cap and cusp are presented. It is noted that while there is good agreement among the various data, more particle measurements are necessary to clear up statistical uncertainties and to facilitate comparison of statistical models.

  4. Analysis of protein complexes through model-based biclustering of label-free quantitative AP-MS data.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I

    2010-06-22

    Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein-protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving bait proteins as compared with proteins only identified as prey. The method does not require specification of the number of bait clusters, which is an advantage against existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data. PMID:20571534

  5. Analysis of protein complexes through model-based biclustering of label-free quantitative AP-MS data

    PubMed Central

    Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I

    2010-01-01

    Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein–protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving bait proteins as compared with proteins only identified as prey. The method does not require specification of the number of bait clusters, which is an advantage against existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data. PMID:20571534

  6. Quantitative experimental modelling of fragmentation during explosive volcanism

    NASA Astrophysics Data System (ADS)

    Thordén Haug, Ø.; Galland, O.; Gisler, G.

    2012-04-01

    Phreatomagmatic eruptions result from the violent interaction between magma and an external source of water, such as ground water or a lake. This interaction causes fragmentation of the magma and/or the host rock, resulting in coarse-grained (lapilli) to very fine-grained (ash) material. The products of phreatomagmatic explosions are classically described by their fragment size distribution, which commonly follows power laws of exponent D. Such a descriptive approach, however, considers the final products only and does not provide information on the dynamics of fragmentation. The aim of this contribution is thus to address the following fundamental questions. What physics govern fragmentation processes? How does fragmentation occur through time? What mechanisms produce power law fragment size distributions? And what scaling laws control the exponent D? To address these questions, we performed a quantitative experimental study. The setup consists of a Hele-Shaw cell filled with a layer of cohesive silica flour, at the base of which a pulse of pressurized air is injected, leading to fragmentation of the layer of flour. The fragmentation process is monitored through time using a high-speed camera. By varying systematically the air pressure (P) and the thickness of the flour layer (h) we observed two morphologies of fragmentation: "lift off", where the silica flour above the injection inlet is ejected upwards, and "channeling", where the air pierces through the layer along sub-vertical conduits. By building a phase diagram, we show that the morphology is controlled by P/(dgh), where d is the density of the flour and g is the gravitational acceleration. To quantify the fragmentation process, we developed a Matlab image analysis program, which calculates the number and sizes of the fragments, and so the fragment size distribution, during the experiments. The fragment size distributions are in general described by power law distributions of
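    The power-law exponent D mentioned above can be estimated from a fragment-size sample in a few lines. The sketch below uses a simple log-log fit to the complementary cumulative distribution on synthetic Pareto data; a maximum-likelihood estimator would be the more careful choice, and nothing here comes from the experiments themselves.

```python
import numpy as np

def fit_exponent(sizes, s_min=1.0):
    """Estimate D in N(>= s) ~ s**(-D) from a sample of fragment sizes."""
    s = np.sort(np.asarray(sizes, dtype=float))
    s = s[s >= s_min]                  # restrict to the power-law tail
    n_geq = np.arange(len(s), 0, -1)   # rank from largest = N(>= s)
    slope, _ = np.polyfit(np.log10(s), np.log10(n_geq), 1)
    return -slope

# Synthetic Pareto-distributed sizes with true exponent D = 1.5
rng = np.random.default_rng(2)
sizes = (1.0 - rng.uniform(size=20000)) ** (-1.0 / 1.5)
print(f"estimated D = {fit_exponent(sizes):.2f}")
```

    Applying such a fit frame by frame to the high-speed images is what turns the descriptive fragment-size statistic into a window on the dynamics of fragmentation.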

  7. A quantitative risk model for early lifecycle decision making

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.

  8. Quantitative modeling and analysis in environmental studies. Technical report

    SciTech Connect

    Gaver, D.P.

    1994-10-01

    This paper reviews some of the many mathematical modeling and statistical data analysis problems that arise in environmental studies. It makes no claim to be comprehensive or truly up-to-date. It will appear as a chapter in a book on ecotoxicology to be published by CRC Press, probably in 1995. Workshops leading to the creation of the book were sponsored by The Conte Foundation.

  9. Unified quantitative model of AMPA receptor trafficking at synapses

    PubMed Central

    Czöndör, Katalin; Mondin, Magali; Garcia, Mikael; Heine, Martin; Frischknecht, Renato; Choquet, Daniel; Sibarita, Jean-Baptiste; Thoumine, Olivier R.

    2012-01-01

    Trafficking of AMPA receptors (AMPARs) plays a key role in synaptic transmission. However, a general framework integrating the two major mechanisms regulating AMPAR delivery at postsynapses (i.e., surface diffusion and internal recycling) is lacking. To this aim, we built a model based on numerical trajectories of individual AMPARs, including free diffusion in the extrasynaptic space, confinement in the synapse, and trapping at the postsynaptic density (PSD) through reversible interactions with scaffold proteins. The AMPAR/scaffold kinetic rates were adjusted by comparing computer simulations to single-particle tracking and fluorescence recovery after photobleaching experiments in primary neurons, in different conditions of synapse density and maturation. The model predicts that the steady-state AMPAR number at synapses is bidirectionally controlled by AMPAR/scaffold binding affinity and PSD size. To reveal the impact of recycling processes in basal conditions and upon synaptic potentiation or depression, spatially and temporally defined exocytic and endocytic events were introduced. The model predicts that local recycling of AMPARs close to the PSD, coupled to short-range surface diffusion, provides rapid control of AMPAR number at synapses. In contrast, because of long-range diffusion limitations, extrasynaptic recycling is intrinsically slower and less synapse-specific. Thus, by discriminating the relative contributions of AMPAR diffusion, trapping, and recycling events on spatial and temporal bases, this model provides unique insights on the dynamic regulation of synaptic strength. PMID:22331885

  10. Comprehensive Quantitative Model of Inner-Magnetosphere Dynamics

    NASA Technical Reports Server (NTRS)

    Wolf, Richard A.

    2002-01-01

    This report includes descriptions of papers, a thesis, and works still in progress which cover observations of space weather in the Earth's magnetosphere. The topics discussed include: 1) modelling of magnetosphere activity; 2) magnetic storms; 3) high energy electrons; and 4) plasmas.

  11. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    ERIC Educational Resources Information Center

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  12. Quantitative models of magnetic and electric fields in the magnetosphere

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1975-01-01

    In order to represent the magnetic field B in the magnetosphere various auxiliary functions can be used: the current density, the scalar potential, toroidal and poloidal potentials, and Euler potentials -- or else, the components of B may be expanded directly. The most versatile among the linear representations is the one based on toroidal and poloidal potentials; it has seen relatively little use in the past but appears to be the most promising one for future work. Other classifications of models include simple testbed models vs. comprehensive ones and analytical vs. numerical representations. The electric field E in the magnetosphere is generally assumed to vary only slowly and to be orthogonal to B, allowing the use of a scalar potential which may be deduced from observations in the ionosphere, from the shape of the plasmapause, or from particle observations in synchronous orbits.
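
    For reference, the toroidal-poloidal representation highlighted in this abstract writes a divergence-free field in terms of two scalar potentials; a standard (Mie) form, with notation assumed here rather than taken from the paper, is:

```latex
% Toroidal-poloidal (Mie) representation of a solenoidal field B,
% with toroidal potential T, poloidal potential P, and position vector r
\mathbf{B} = \nabla \times \left( T\,\mathbf{r} \right)
           + \nabla \times \nabla \times \left( P\,\mathbf{r} \right)
```

    In this form ∇·B = 0 holds identically, which is one reason the representation is attractive for magnetospheric field models.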

  13. Afference copy as a quantitative neurophysiological model for consciousness.

    PubMed

    Cornelis, Hugo; Coop, Allan D

    2014-06-01

    Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider that there is nothing particularly complicated about consciousness when it is viewed as a necessary process of the vertebrate nervous system. Here, we propose that a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications. PMID:25012715

  14. Quantitative Risk Modeling of Fire on the International Space Station

    NASA Technical Reports Server (NTRS)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaks, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority; however, programmatic resources are limited, so risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  15. Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    PubMed Central

    Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636
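
    The production-efficiency calculation rests on the Frank-Tamm relation; a commonly quoted form (symbols assumed here: particle charge q, speed v, refractive index n(ω), permeability μ(ω)) is:

```latex
% Frank-Tamm: energy radiated per unit path length and angular frequency,
% nonzero only while the particle exceeds the local phase velocity of light
\frac{\mathrm{d}^2 E}{\mathrm{d}x\,\mathrm{d}\omega}
  = \frac{q^2}{4\pi}\, \mu(\omega)\, \omega
    \left( 1 - \frac{c^2}{v^2 n^2(\omega)} \right),
\qquad v > \frac{c}{n(\omega)}
```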

  16. A quantitative assessment of torque-transducer models for magnetoreception

    PubMed Central

    Winklhofer, Michael; Kirschvink, Joseph L.

    2010-01-01

    Although ferrimagnetic material appears suitable as a basis of magnetic field perception in animals, it is not known by which mechanism magnetic particles may transduce the magnetic field into a nerve signal. Provided that magnetic particles have remanence or anisotropic magnetic susceptibility, an external magnetic field will exert a torque and may physically twist them. Several models of such biological magnetic-torque transducers on the basis of magnetite have been proposed in the literature. We analyse from first principles the conditions under which they are viable. Models based on biogenic single-domain magnetite prove both effective and efficient, irrespective of whether the magnetic structure is coupled to mechanosensitive ion channels or to an indirect transduction pathway that exploits the stray field produced by the magnetic structure at different field orientations. On the other hand, torque-detector models that are based on magnetic multi-domain particles in the vestibular organs turn out to be ineffective. Also, we provide a generic classification scheme of torque transducers in terms of axial or polar output, within which we discuss the results from behavioural experiments conducted under altered field conditions or with pulsed fields. We find that the common assertion that a magnetoreceptor based on single-domain magnetite could not form the basis for an inclination compass does not always hold. PMID:20086054

  17. Quantitative empirical model of the magnetospheric flux-transfer process

    SciTech Connect

    Holzer, R.E.; McPherron, R.L.; Hardy, D.A.

    1986-03-01

    A simple model for estimating the open flux in the polar cap was based on precipitating electron data from polar orbiting satellites. This model was applied in the growth phase of two substorms on March 27, 1979, to determine the fraction of the flux of the southward IMF which merged at the forward magnetopause, contributing to the polar cap flux. The effective merging efficiency at the forward magnetopause was found to be 0.19 ± 0.03 under average solar wind conditions. The westward electrojet current during the expansion and recovery phases of the same substorms was approximately proportional to the time rate of decrease of polar flux due to merging in the tail. An empirical model for calculating polar-cap flux changes using the merging at the forward magnetopause for estimating increases and the westward electrojet for decreases was compared with observed changes in the polar-cap flux. Agreement between the predicted and observed changes in the polar-cap flux was tested over an interval of 8 hours. The advantages and limitations of the method are discussed.

  18. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    PubMed

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636

  19. Quantitative description of realistic wealth distributions by kinetic trading models.

    PubMed

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power law behavior x^-(1+α) at the high end, where, in general, α is greater than 1 (Pareto's law). Models based on kinetic theory, where a set of interacting agents trade money, yield power law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, finding the saving propensity distribution which yields a given wealth distribution over all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets. PMID:18999570

  20. Quantitative description of realistic wealth distributions by kinetic trading models

    NASA Astrophysics Data System (ADS)

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power law behavior x^-(1+α) at the high end, where, in general, α is greater than 1 (Pareto’s law). Models based on kinetic theory, where a set of interacting agents trade money, yield power law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, finding the saving propensity distribution which yields a given wealth distribution over all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.
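
    A minimal sketch of the kind of kinetic trading model with saving propensity that both records describe (a Chakraborti-Chakrabarti-style exchange rule); the uniform λ distribution and all parameter values are illustrative assumptions, not the paper's fitted inverse solution.

```python
import numpy as np

def kinetic_exchange(n_agents=1_000, n_steps=500_000, seed=0):
    """Pairwise money exchange with quenched saving propensities lam[i];
    total wealth is conserved at every step."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)            # initial wealth
    lam = rng.random(n_agents)       # saving propensities (illustrative: uniform)
    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        eps = rng.random()
        pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
        w[i] = lam[i] * w[i] + eps * pool
        w[j] = lam[j] * w[j] + (1 - eps) * pool
    return w

# A heterogeneous lam distribution tends to produce a Pareto-like tail in w.
wealth = kinetic_exchange()
```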

  1. Concentric Coplanar Capacitive Sensor System with Quantitative Model

    NASA Technical Reports Server (NTRS)

    Bowler, Nicola (Inventor); Chen, Tianming (Inventor)

    2014-01-01

    A concentric coplanar capacitive sensor includes a charged central disc forming a first electrode, an outer annular ring coplanar with and outer to the charged central disc, the outer annular ring forming a second electrode, and a gap between the charged central disc and the outer annular ring. The first electrode and the second electrode may be attached to an insulative film. A method provides for determining transcapacitance between the first electrode and the second electrode and using the transcapacitance in a model that accounts for a dielectric test piece to determine inversely the properties of the dielectric test piece.

  2. Existence, uniqueness and stability of positive periodic solution for a nonlinear prey-competition model with delays

    NASA Astrophysics Data System (ADS)

    Chen, Fengde; Xie, Xiangdong; Shi, Jinlin

    2006-10-01

    A nonlinear periodic predator-prey model with m preys, (n-m) predators and delays is proposed in this paper, which can be seen as a modification of the traditional Lotka-Volterra prey-competition model. Sufficient conditions which guarantee the existence of a unique globally attractive positive periodic solution of the system are obtained.

  3. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  4. [Establishment of simultaneous quantitative model of five alkaloids from Corydalis Rhizoma by near-infrared spectrometry].

    PubMed

    Yang, Li-xin; Zhang, Yong-xin; Feng, Wei-hong; Li, Chun

    2015-10-01

    This paper established a near-infrared spectroscopy quantitative model for the simultaneous quantitative analysis of coptisine hydrochloride, dehydrocorydaline, tetrahydropalmatine, corydaline and glaucine in Corydalis Rhizoma. Firstly, the chemical values of the five components in Corydalis Rhizoma were determined by reversed-phase high performance liquid chromatography (RP-HPLC) with UV detection. Then, the quantitative calibration model was established and optimized by Fourier transform near-infrared spectroscopy (NIRS) combined with partial least squares (PLS) regression. The calibration model was evaluated by the correlation coefficient (r), the root mean square error of calibration (RMSEC) and the root mean square error of cross-validation (RMSECV) of the calibration model, as well as the correlation coefficient (r) and the root mean square error of prediction (RMSEP) of the prediction model. For the quantitative calibration model, the r, RMSEC and RMSECV of coptisine hydrochloride, dehydrocorydaline, tetrahydropalmatine, corydaline and glaucine were 0.941 0, 0.972 7, 0.964 3, 0.978 1, 0.979 9; 0.006 7, 0.003 5, 0.005 9, 0.002 8, 0.005 9; and 0.015, 0.011, 0.020, 0.010 and 0.022, respectively. For the prediction model, the r and RMSEP of the five components were 0.916 6, 0.942 9, 0.943 6, 0.916 7, 0.914 5; and 0.009, 0.006 6, 0.007 5, 0.006 9 and 0.011, respectively. The established near-infrared spectroscopy quantitative model is relatively stable, accurate and reliable for the simultaneous quantitative analysis of the five alkaloids, and is expected to be used for the rapid determination of the five components in crude drug of Corydalis Rhizoma. PMID:26975110
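
    A minimal sketch of a PLS calibration of NIR spectra against HPLC reference values, reporting the RMSEC/RMSECV/RMSEP-style figures of merit used in the abstract; the data files and the component count are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, train_test_split

# X: NIR spectra (n_samples x n_wavelengths); y: HPLC reference values
X, y = np.load("spectra.npy"), np.load("hplc_values.npy")   # placeholder files
X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)       # tune components by CV

rmse = lambda a, b: float(np.sqrt(np.mean((np.ravel(a) - np.ravel(b)) ** 2)))
rmsec = rmse(y_cal, pls.predict(X_cal))                     # calibration error
rmsecv = rmse(y_cal, cross_val_predict(pls, X_cal, y_cal))  # cross-validation error
rmsep = rmse(y_test, pls.predict(X_test))                   # prediction error
```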

  5. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the exemplar…

  6. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  7. Global existence of solutions and uniform persistence of a diffusive predator-prey model with prey-taxis

    NASA Astrophysics Data System (ADS)

    Wu, Sainan; Shi, Junping; Wu, Boying

    2016-04-01

    This paper proves the global existence and boundedness of solutions to a general reaction-diffusion predator-prey system with prey-taxis defined on a smooth bounded domain with no-flux boundary condition. The result holds for domains in arbitrary spatial dimension and small prey-taxis sensitivity coefficient. This paper also proves the existence of a global attractor and the uniform persistence of the system under some additional conditions. Applications to models from ecology and chemotaxis are discussed.

  8. Canalization, genetic assimilation and preadaptation. A quantitative genetic model.

    PubMed Central

    Eshel, I; Matessi, C

    1998-01-01

    We propose a mathematical model to analyze the evolution of canalization for a trait under stabilizing selection, where each individual in the population is randomly exposed to different environmental conditions, independently of its genotype. Without canalization, our trait (primary phenotype) is affected by both genetic variation and environmental perturbations (morphogenic environment). Selection of the trait depends on individually varying environmental conditions (selecting environment). Assuming no plasticity initially, morphogenic effects are not correlated with the direction of selection in individual environments. Under quite plausible assumptions we show that natural selection favors a system of canalization that tends to repress deviations from the phenotype that is optimal in the most common selecting environment. However, many experimental results, dating back to Waddington and others, indicate that natural canalization systems may fail under extreme environments. While this can be explained as an impossibility of the system to cope with extreme morphogenic pressure, we show that a canalization system that tends to be inactivated in extreme environments is even more advantageous than rigid canalization. Moreover, once this adaptive canalization is established, the resulting evolution of primary phenotype enables substantial preadaptation to permanent environmental changes resembling extreme niches of the previous environment. PMID:9691063

  9. A quantitative confidence signal detection model: 1. Fitting psychometric functions.

    PubMed

    Yi, Yongwoo; Merfeld, Daniel M

    2016-04-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777
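
    For contrast with the confidence-based method, here is a minimal sketch of the conventional approach the abstract benchmarks against: a maximum-likelihood fit of a cumulative-Gaussian psychometric function to binary forced-choice data. All data and parameter names are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, stimulus, response):
    """Binary responses under a cumulative-Gaussian psychometric function
    with bias mu and spread sigma (fit on a log scale to stay positive)."""
    mu, log_sigma = params
    p = norm.cdf(stimulus, loc=mu, scale=np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(response * np.log(p) + (1 - response) * np.log(1 - p))

rng = np.random.default_rng(1)
stim = rng.uniform(-3, 3, size=100)                            # ~100 trials
resp = (rng.random(100) < norm.cdf(stim, 0.5, 1.2)).astype(float)

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(stim, resp))
mu_hat, sigma_hat = fit.x[0], float(np.exp(fit.x[1]))
```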

  10. Quantitative nonlinearity analysis of model-scale jet noise

    NASA Astrophysics Data System (ADS)

    Miller, Kyle G.; Reichman, Brent O.; Gee, Kent L.; Neilsen, Tracianne B.; Atchley, Anthony A.

    2015-10-01

    The effects of nonlinearity on the power spectrum of jet noise can be directly compared with those of atmospheric absorption and geometric spreading through an ensemble-averaged, frequency-domain version of the generalized Burgers equation (GBE) [B. O. Reichman et al., J. Acoust. Soc. Am. 136, 2102 (2014)]. The rate of change in the sound pressure level due to the nonlinearity, in decibels per jet nozzle diameter, is calculated using a dimensionless form of the quadspectrum of the pressure and the squared-pressure waveforms. In this paper, this formulation is applied to atmospheric propagation of a spherically spreading, initial sinusoid and unheated model-scale supersonic (Mach 2.0) jet data. The rate of change in level due to nonlinearity is calculated and compared with estimated effects due to absorption and geometric spreading. Comparing these losses with the change predicted due to nonlinearity shows that absorption and nonlinearity are of similar magnitude in the geometric far field, where shocks are present, which causes the high-frequency spectral shape to remain unchanged.

  11. A quantitative model of the biogeochemical transport of iodine

    NASA Astrophysics Data System (ADS)

    Weng, H.; Ji, Z.; Weng, J.

    2010-12-01

    Iodine deficiency disorders (IDD) are among the world's most prevalent public health problems, yet they are preventable by dietary iodine supplements. To better understand the biogeochemical behavior of iodine and to explore safer and more efficient ways of iodine supplementation as alternatives to iodized salt, we studied the behavior of iodine as it is absorbed, accumulated and released by plants. Using Chinese cabbage as a model system and the 125I tracing technique, we established that plants take up exogenous iodine from soil, most of which is transported to the stem and leaf tissue. The level of absorption of iodine by plants depends on the iodine concentration in the soil, as well as on the soil type, since soil types differ in iodine-adsorption capacity. The leaching experiment showed that the iodine content remaining in the soil after leaching is determined by the iodine-adsorption ability of the soil and the pH of the leaching solution, but not by the volume of leaching solution. Iodine in soil and plants can also be released to the air via vaporization in a concentration-dependent manner. This study provides a scientific basis for developing new methods to prevent IDD through iodized vegetable production.

  12. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose: To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods: Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results: Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34…
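
    One plausible way to combine several quantitative MR parameters into a voxel-wise composite biomarker score is logistic regression, sketched below; the feature list mirrors the abstract, but the modeling choice, file names and array shapes are assumptions, not the published method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Per-voxel features, e.g. T2, ADC, Ktrans, kep (a four-parameter model)
X = np.load("voxel_features.npy")    # placeholder, shape (n_voxels, 4)
y = np.load("voxel_labels.npy")      # placeholder, 1 = cancer (histopathology)

model = LogisticRegression(max_iter=1000).fit(X, y)
cbs = model.predict_proba(X)[:, 1]           # composite biomarker score per voxel
print("AUC:", roc_auc_score(y, cbs))         # compare with single-parameter AUCs
```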

  13. Existence and analyticity of eigenvalues of a two-channel molecular resonance model

    NASA Astrophysics Data System (ADS)

    Lakaev, S. N.; Latipov, Sh. M.

    2011-12-01

    We consider a family of operators H_γμ(k), k ∈ 𝕋^d := (-π, π]^d, associated with the Hamiltonian of a system consisting of at most two particles on a d-dimensional lattice ℤ^d, interacting via both a pair contact potential (μ > 0) and creation and annihilation operators (γ > 0). We prove the existence of a unique eigenvalue of H_γμ(k), k ∈ 𝕋^d, or its absence, depending on both the interaction parameters γ, μ ≥ 0 and the system quasimomentum k ∈ 𝕋^d. We show that the corresponding eigenvector is analytic. We establish that the eigenvalue and eigenvector are analytic functions of the quasimomentum k ∈ 𝕋^d in the existence domain G ⊂ 𝕋^d.

  14. Existence and large time behavior for a stochastic model of modified magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Razafimandimby, Paul André; Sango, Mamadou

    2015-10-01

    In this paper, we study a system of nonlinear stochastic partial differential equations describing the motion of turbulent non-Newtonian media in the presence of a fluctuating magnetic field. The system is obtained essentially by coupling the dynamical equations of a non-Newtonian fluid having p-structure with the Maxwell equations. We mainly show the existence of weak martingale solutions and their exponential decay as time goes to infinity.

  15. Using Item-Type Performance Covariance to Improve the Skill Model of an Existing Tutor

    ERIC Educational Resources Information Center

    Pavlik, Philip I., Jr.; Cen, Hao; Wu, Lili; Koedinger, Kenneth R.

    2008-01-01

    Using data from an existing pre-algebra computer-based tutor, we analyzed the covariance of item-types with the goal of describing a more effective way to assign skill labels to item-types. Analyzing covariance is important because it allows us to place the skills in a related network in which we can identify the role each skill plays in learning…

  16. Existence Theorems for Vortices in the Aharony-Bergman-Jaferis-Maldacena Model

    NASA Astrophysics Data System (ADS)

    Han, Xiaosen; Yang, Yisong

    2015-01-01

    A series of sharp existence and uniqueness theorems are established for the multiple vortex solutions in the supersymmetric Chern-Simons-Higgs theory formalism of Aharony, Bergman, Jaferis, and Maldacena, for which the Higgs bosons and Dirac fermions lie in the bifundamental representation of the general gauge symmetry group. The governing equations are of the BPS type and were derived by Kim, Kim, Kwon, and Nakajima in the mass-deformed framework labeled by a continuous parameter.

  17. Quantitative analysis of free and bonded forms of volatile sulfur compounds in wine. Basic methodologies and evidence showing the existence of reversible cation-complexed forms.

    PubMed

    Franco-Luesma, Ernesto; Ferreira, Vicente

    2014-09-12

    This paper first examines some basic aspects critical to the analysis of Volatile Sulfur Compounds (VSCs), such as the analytical characteristics of the GC-pFPD system and the stability of the different standard solutions required for a proper calibration. Next, a direct static headspace analytical method for the determination of exclusively free forms of VSCs is developed. Method repeatability is better than 4%, detection limits for the main analytes are below 0.5 μg L(-1), and the method's dynamic linear range (r(2) > 0.99) is expanded by controlling the split ratio in the chromatographic inlet to cover the natural range of occurrence of these compounds in wines. The method gives reliable estimates of headspace concentrations but, as expected, suffers from strong matrix effects, with recoveries ranging from 0 to 100% for H2S and from 60 to 100% for the other mercaptans. This demonstrates the existence of strong interactions of these compounds with different matrix components. The complexing ability of Cu(2+), and to a lesser extent of Fe(2+) and Zn(2+), was checked experimentally. A previously developed method, in which the wine is strongly diluted with brine and the volatiles are preconcentrated by HS-SPME, was found to give a reliable estimate of the total amount (free + complexed) of mercaptans, demonstrating that metal-mercaptan complexes are reversible. The comparative analysis of different wines by the two procedures reveals that in normal wines H2S and methanethiol can be complexed at levels above 99%, with averages around 97% for H2S and 75% for methanethiol, while thioethers such as dimethyl sulfide (DMS) are not complexed. Overall, the proposed strategy may be generalized to understand problems caused by VSCs in different matrices. PMID:25064535

  18. Structural and Stratigraphic Evolution of the Iberia and Newfoundland Rifted Margins: A Quantitative Modeling Approach

    NASA Astrophysics Data System (ADS)

    Mohn, G.; Karner, G. D.; Manatschal, G.; Johnson, C. A.

    2014-12-01

    Rifted margins generally develop through polyphase extensional events that eventually lead to break-up. We investigate the spatial and temporal evolution of the Iberia-Newfoundland rifted margin from its Permian post-orogenic stage to early Cretaceous break-up. We have applied Quantitative Basin Analysis to integrate seismic stratigraphic interpretations and drill hole data of representative sections across the Iberia-Newfoundland margins with kinematic models for the thinning of the lithosphere and subsequent isostatic readjustment. Our goal is to predict the distribution of extension and thinning, environments of deposition, crustal structure and subsidence history as functions of space and time. The first sediments deposited on the Iberian continental crust were in response to Permian lithospheric thinning, associated with magmatic underplating and subsequent thermal re-equilibration of the lithosphere. During late Triassic-early Jurassic rifting, broadly distributed depth-independent lithospheric extension occurred, followed by late Jurassic rifting that became increasingly focused with time and depth-dependent during the early Cretaceous. There is, however, an along-strike temporality in the deformation of the Iberia-Newfoundland margin: significant Valanginian-Hauterivian deformation characterizes the northern Galicia Bank-Flemish Cap, while the southern Iberia-Newfoundland region is characterized by Tithonian-early Berriasian extension. Deformation localized with time on both margins, leading to late Aptian break-up. Matching the distribution and magnitude of subsidence across the profiles requires significant thinning of the middle/lower crust and subcontinental lithospheric mantle, leading to the formation of the hyper-extended domains. The late-stage deformation of both margins was characterized by predominantly brittle deformation of the residual continental crust, leading to exhumation of subcontinental mantle and ultimately to seafloor…

  19. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study.

    PubMed

    Borji, Ali; Sihite, Dicky N; Itti, Laurent

    2013-01-01

    Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as "visual saliency." Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often have common easy/difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state-of-the-art, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains. PMID:22868572

  20. Had the Planet Mars Not Existed: Kepler's Equant Model and Its Physical Consequences

    ERIC Educational Resources Information Center

    Bracco, C.; Provost, J.P.

    2009-01-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal…

  1. Adapting Existing Spatial Data Sets to New Uses: An Example from Energy Modeling

    SciTech Connect

    Johanesson, G; Stewart, J S; Barr, C; Sabeff, L B; George, R; Heimiller, D; Milbrandt, A

    2006-06-23

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, and economic projections. These data are available at various spatial and temporal scales, which may be different from those needed by the energy modeling community. If the translation from the original format to the format required by the energy researcher is incorrect, then resulting models can produce misleading conclusions. This is of increasing importance, because of the fine resolution data required by models for new alternative energy sources such as wind and distributed generation. This paper addresses the matter by applying spatial statistical techniques which improve the usefulness of spatial data sets (maps) that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) imputing missing data and (3) merging spatial data sets.
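
    A toy illustration of point (1), aggregating a fine-resolution raster to a coarser grid by block averaging (disaggregation would invert this step); it is purely illustrative, not the spatial-statistical machinery of the paper.

```python
import numpy as np

def block_aggregate(grid, factor):
    """Aggregate a 2-D raster by averaging non-overlapping factor x factor blocks."""
    h, w = grid.shape
    assert h % factor == 0 and w % factor == 0, "grid must tile evenly"
    return grid.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

fine = np.random.default_rng(0).random((120, 120))  # e.g., 1 km resource cells
coarse = block_aggregate(fine, 12)                  # -> 10 km cells
```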

  2. The Power of a Good Idea: Quantitative Modeling of the Spread of Ideas from Epidemiological Models

    SciTech Connect

    Bettencourt, L. M. A.; Cintron-Arias, A.; Kaiser, D. I.; Castillo-Chavez, C.

    2005-05-05

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.
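
    A minimal SIR-style sketch of the epidemic analogy described here, with contact rate beta and a "recovery" (loss-of-interest) rate gamma; the parameter values are illustrative, not the paper's Feynman-diagram fits.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    s, i, r = y                 # susceptible, adopters ("infected"), recovered
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0, 50), [0.99, 0.01, 0.0],
                args=(0.5, 0.1), dense_output=True)   # beta, gamma illustrative
t = np.linspace(0, 50, 200)
s, i, r = sol.sol(t)            # the adoption curve i(t) rises, peaks, decays
```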

  3. Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models

    NASA Astrophysics Data System (ADS)

    de Diesbach, Pablo Brice

    This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and critique the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from deeper, more business-oriented analyses of the mechanisms of rhetoric in marketing and communication, and of the possible role of E-SQ in man-agent interaction. We finally suggest a refined model for the impacts of these agents on web site users, and comment on the limits of the improved model.

  4. Can existing climate models be used to study anthropogenic changes in tropical cyclone climate

    SciTech Connect

    Broccoli, A.J.; Manabe, S.

    1990-10-01

    The utility of current generation climate models for studying the influence of greenhouse warming on the tropical storm climatology is examined. A method developed to identify tropical cyclones is applied to a series of model integrations. The global distribution of tropical storms is simulated by these models in a generally realistic manner. While the model resolution is insufficient to reproduce the fine structure of tropical cyclones, the simulated storms become more realistic as resolution is increased. To obtain a preliminary estimate of the response of the tropical cyclone climatology, CO2 was doubled using models with varying cloud treatments and different horizontal resolutions. In the experiment with prescribed cloudiness, the number of storm-days, a combined measure of the number and duration of tropical storms, undergoes a statistically significant reduction; a similar reduction of the number of storm-days is indicated in the experiment with cloud feedback. In both cases the response is independent of horizontal resolution. While the inconclusive nature of these experimental results highlights the uncertainties that remain in examining the details of greenhouse-gas induced climate change, the ability of the models to qualitatively simulate the tropical storm climatology suggests that they are appropriate tools for this problem.

  5. Global existence analysis for degenerate energy-transport models for semiconductors

    NASA Astrophysics Data System (ADS)

    Zamponi, Nicola; Jüngel, Ansgar

    2015-04-01

    A class of energy-transport equations without electric field under mixed Dirichlet-Neumann boundary conditions is analyzed. The system of degenerate and strongly coupled parabolic equations for the particle density and temperature arises in semiconductor device theory. The global-in-time existence of weak nonnegative solutions is shown. The proof consists of a variable transformation and a semi-discretization in time such that the discretized system becomes elliptic and semilinear. Positive approximate solutions are obtained by Stampacchia truncation arguments and a new cut-off test function. Nonlogarithmic entropy inequalities yield gradient estimates which allow for the limit of vanishing time step sizes. Exploiting the entropy inequality, the long-time convergence of the weak solutions to the constant steady state is proved. Because of the lack of appropriate convex Sobolev inequalities to estimate the entropy dissipation, only an algebraic decay rate is obtained. Numerical experiments indicate that the decay rate is typically exponential.

  6. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2016-07-01

    …coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than a true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators and modeling impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.

  7. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2015-09-01

    …also provided insights into the EDII, in particular highlighting drought events where missing impact reports reflect a lack of recording rather than a true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators and modeling impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis, and highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators alongside more traditional stakeholder-led approaches.

  8. Building Coalitions To Provide HIV Legal Advocacy Services: Utilizing Existing Disability Models. AIDS Technical Report, No. 5.

    ERIC Educational Resources Information Center

    Harvey, David C.; Ardinger, Robert S.

    This technical report is part of a series on AIDS/HIV (Acquired Immune Deficiency Syndrome/Human Immunodeficiency Virus) and is intended to help link various legal advocacy organizations providing services to persons with mental illness or developmental disabilities. This report discusses strategies to utilize existing disability models for…

  9. Utilization of data estimation via existing models, within a tiered data quality system, for populating species sensitivity distributions

    EPA Science Inventory

    The acquisition of toxicity test data of sufficient quality from the open literature to fulfill taxonomic diversity requirements can be a limiting factor in the creation of new 304(a) Aquatic Life Criteria. The use of existing models (WebICE and ACE) that estimate acute and chronic eff...

  10. Developmental modeling effects on the quantitative and qualitative aspects of motor performance.

    PubMed

    McCullagh, P; Stiehl, J; Weiss, M R

    1990-12-01

    The purpose of the present experiment was to replicate and extend previous developmental modeling research by examining the qualitative as well as quantitative aspects of motor performance. Eighty females of two age groups (5-0 to 6-6 and 7-6 to 9-0 years) were randomly assigned to conditions within a 2 x 2 x 2 (Age x Model Type x Rehearsal) factorial design. Children received either verbal instructions only (no model) or a visual demonstration with experimenter-given verbal cues (verbal model) of a five-part dance skill sequence. Children were either prompted to verbally rehearse before skill execution or merely asked to reproduce the sequence without prompting. Both quantitative (order) and qualitative (form) performances were assessed. Results revealed a significant age main effect for both order and form performance, with older children performing better than younger children. A model type main effect was also found for both order and form performance. The verbal model condition produced better qualitative performance, whereas the no model condition resulted in better quantitative scores. These results are discussed in terms of differential coding strategies that may influence task components in modeling. PMID:2132893

  11. Criteria for determining whether mismatch responses exist in animal models: Focus on rodents.

    PubMed

    Harms, Lauren; Michie, Patricia T; Näätänen, Risto

    2016-04-01

    The mismatch negativity (MMN) component of the auditory event-related potential, elicited in response to unexpected stimuli in the auditory environment, has great value for cognitive neuroscience research. It is altered in several neuropsychiatric disorders such as schizophrenia. The ability to measure and manipulate MMN-like responses in animal models, particularly rodents, would provide an enormous opportunity to learn more about the neurobiology underlying MMN. However, the MMN in humans is a very specific phenomenon: how do we decide which features we should focus on emulating in an animal model to achieve the highest level of translational validity? Here we discuss some of the key features of MMN in humans and summarise the success with which they have been translated into rodent models. Many studies from several different labs have successfully shown that the rat brain is capable of generating deviance detection responses that satisfy the criteria for the human MMN. PMID:26196895

  12. A novel adipose-specific gene deletion model demonstrates potential pitfalls of existing methods.

    PubMed

    Mullican, Shannon E; Tomaru, Takuya; Gaddis, Christine A; Peed, Lindsey C; Sundaram, Anand; Lazar, Mitchell A

    2013-01-01

    Adipose-specific gene deletion in mice is crucial in determining gene function in adipocyte homeostasis and the development of obesity. We noted 100% mortality when the Hdac3 gene was conditionally deleted using Fabp4-Cre mice, the most commonly used model of adipose-targeted Cre recombinase. However, this surprising result was not reproduced using other models of adipose targeting of Cre, including a novel Retn-Cre mouse. These findings underscore the need for caution when interpreting data obtained using Fabp4-Cre mice and should encourage the use of additional or alternative adipose-targeting Cre mouse models before drawing conclusions about in vivo adipocyte-specific functions. PMID:23192980

  13. Had the planet Mars not existed: Kepler's equant model and its physical consequences

    NASA Astrophysics Data System (ADS)

    Bracco, C.; Provost, J.-P.

    2009-09-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal acceleration with an r^-2 dependence on the distance to the Sun. If this dependence is assumed to be universal, Kepler's third law follows immediately. This elementary exercise in kinematics for undergraduates emphasizes the proximity of the equant model coming from ancient Greece with our present knowledge. It adds to its historical interest a didactical relevance concerning, in particular, the discussion of the Aristotelian or Newtonian conception of motion.

  14. Benthic-Pelagic Coupling in Biogeochemical and Climate Models: Existing Approaches, Recent developments and Roadblocks

    NASA Astrophysics Data System (ADS)

    Arndt, Sandra

    2016-04-01

    Marine sediments are key components in the Earth System. They host the largest carbon reservoir on Earth, provide the only long-term sink for atmospheric CO2, recycle nutrients and represent the most important climate archive. Biogeochemical processes in marine sediments are thus essential for our understanding of global biogeochemical cycles and climate. They are, first and foremost, donor controlled and thus driven by the rain of particulate material from the euphotic zone and influenced by the overlying bottom water. Geochemical species may undergo several recycling loops (e.g. authigenic mineral precipitation/dissolution) before they are either buried or diffuse back to the water column. The tightly coupled and complex pelagic and benthic process interplay thus delays recycling fluxes, significantly modifies the depositional signal and controls the long-term removal of carbon from the ocean-atmosphere system. Despite the importance of this mutual interaction, coupled regional/global biogeochemical models and (paleo)climate models, which are designed to assess and quantify the transformations and fluxes of carbon and nutrients and evaluate their response to past and future perturbations of the climate system, either completely neglect marine sediments or incorporate a highly simplified representation of benthic processes. On the other end of the spectrum, coupled, multi-component state-of-the-art early diagenetic models have been successfully developed and applied over the past decades to reproduce observations and quantify sediment-water exchange fluxes, but cannot easily be coupled to pelagic models. The primary constraint here is the high computational cost of simulating all of the essential redox and equilibrium reactions within marine sediments that control carbon burial and benthic recycling fluxes: a barrier that is easily exacerbated if a variety of benthic environments are to be spatially resolved. This presentation provides an integrative overview of…

  15. The Existence and Stability Analysis of the Equilibria in Dengue Disease Infection Model

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Supriatna, A. K.; Soewono, E.

    2015-06-01

    In this paper we formulate an SIR (Susceptible - Infective - Recovered) model of dengue fever transmission with constant recruitment. We found a threshold parameter K0, known as the Basic Reproduction Number (BRN). This model has two equilibria, a disease-free equilibrium and an endemic equilibrium. By constructing a suitable Lyapunov function, we show that the disease-free equilibrium is globally asymptotically stable whenever the BRN is less than one, and that when it is greater than one, the endemic equilibrium is globally asymptotically stable. Numerical results show the dynamics of each compartment together with the effect of multiple bio-agent intervention as a control on dengue transmission.
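
    For orientation, a standard host-only SIR skeleton with constant recruitment of the kind described, together with its threshold parameter; the notation is assumed here, and the paper's full model additionally couples host and vector dynamics.

```latex
% SIR with constant recruitment \mu N; the disease-free equilibrium (N,0,0)
% is globally asymptotically stable when K_0 < 1
\frac{dS}{dt} = \mu N - \frac{\beta S I}{N} - \mu S, \qquad
\frac{dI}{dt} = \frac{\beta S I}{N} - (\gamma + \mu) I, \qquad
\frac{dR}{dt} = \gamma I - \mu R, \qquad
K_0 = \frac{\beta}{\gamma + \mu}
```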

  16. Existing Whole-House Solutions Case Study: Community-Scale Energy Modeling - Southeastern United States

    SciTech Connect

    2014-12-01

    Community-scale energy modeling and testing are useful for determining energy conservation measures that will effectively reduce energy use. To that end, IBACOS analyzed pre-retrofit daily utility data to sort homes by energy consumption, allowing for better targeting of homes for physical audits. Following ASHRAE Guideline 14 normalization procedures, electricity consumption of 1,166 all-electric, production-built homes was modeled. The homes were in two communities: one built in the 1970s and the other in the mid-2000s.

  17. Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  18. Framework for a Quantitative Systemic Toxicity Model (FutureToxII)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  19. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  20. Comparison of Existing Responsiveness-to-Intervention Models to Identify and Answer Implementation Questions

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Ysseldyke, James E.

    2005-01-01

    Responsiveness-to-intervention (RTI) is the front-running candidate to replace current practice in diagnosing learning disabilities, but researchers have identified several questions about implementation. Specific questions include: Are there validated intervention models? Are there adequately trained personnel? What leadership is needed? When…

  1. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT.

    PubMed

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R; La Riviere, Patrick J; Alessio, Adam M

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1, cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneity model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow (by 47.5% on average), while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions across all quantitative methods and the range of techniques evaluated. This
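    As a rough illustration of the quantitative approach, the sketch below fits the classic two-compartment (plasma plus tissue) exchange form, whose tissue impulse response is K1*exp(-k2*t), to a simulated time attenuation curve. All values and the input function are hypothetical stand-ins; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 30.0, 1.0)  # acquisition times (s), 1 s sampling

def aif(t, t0=4.0, A=300.0, tau=4.0):
    """Hypothetical gamma-variate arterial input function (HU enhancement)."""
    s = np.clip(t - t0, 0.0, None)
    return A * s * np.exp(-s / tau)

def tissue_curve(t, K1, k2):
    """Two-compartment tissue curve: convolution of the AIF with K1*exp(-k2*t)."""
    dt = t[1] - t[0]
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(aif(t), irf)[: len(t)] * dt

# Simulate a noisy acquisition for assumed "true" kinetic parameters
rng = np.random.default_rng(0)
true_K1, true_k2 = 0.02, 0.10  # s^-1, hypothetical values
meas = tissue_curve(t, true_K1, true_k2) + rng.normal(0.0, 2.0, t.size)

# Recover the flow-related uptake K1 by nonlinear least squares
(K1_hat, k2_hat), _ = curve_fit(tissue_curve, t, meas, p0=[0.01, 0.05])
print(f"estimated K1 = {K1_hat:.3f} s^-1 (true {true_K1})")
```

    Dose reduction can be mimicked in such a sketch by subsampling t or inflating the noise term, which is loosely how the sampling-interval and tube-current comparison above operates.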

  2. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1, cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneity model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow (by 47.5% on average), while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions across all quantitative methods and the range of techniques evaluated. This suggests that

  3. Principles of microRNA Regulation Revealed Through Modeling microRNA Expression Quantitative Trait Loci

    PubMed Central

    Budach, Stefan; Heinig, Matthias; Marsico, Annalisa

    2016-01-01

    Extensive work has been dedicated to studying mechanisms of microRNA-mediated gene regulation. However, the transcriptional regulation of microRNAs themselves is far less well understood, due to difficulties in determining the transcription start sites of transient primary transcripts. This challenge can be addressed using expression quantitative trait loci (eQTLs), whose regulatory effects represent a natural source of perturbation of cis-regulatory elements. Here we used previously published cis-microRNA-eQTL data for the human GM12878 cell line, promoter predictions, and other functional annotations to determine the relationship between functional elements and microRNA regulation. We built a logistic regression model that classifies microRNA/SNP pairs into eQTLs or non-eQTLs with 85% accuracy; it shows microRNA-eQTL enrichment for microRNA precursors, promoters, enhancers, and transcription factor binding sites, and depletion for repressed chromatin. Interestingly, although there is a large overlap between microRNA eQTLs and messenger RNA eQTLs of host genes, 74% of these shared eQTLs affect microRNA and host expression independently. Considering microRNA-only eQTLs, we find a significant enrichment for intronic promoters, validating the existence of alternative promoters for intragenic microRNAs. Finally, in line with the GM12878 cell line being derived from B cells, we find genome-wide association (GWA) variants associated with blood-related traits more likely to be microRNA eQTLs than random GWA and non-GWA variants, aiding the interpretation of GWA results. PMID:27260304
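    A minimal sketch of this kind of classifier is shown below, using simulated annotation features in place of the paper's real eQTL data; the feature set, effect sizes, and labels are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical binary annotations for each microRNA/SNP pair: overlap with
# a precursor, promoter, enhancer, TF binding site, and repressed chromatin
X = rng.integers(0, 2, size=(n, 5)).astype(float)
logit = 1.2*X[:, 0] + 0.8*X[:, 1] + 0.6*X[:, 2] + 0.7*X[:, 3] - 0.9*X[:, 4] - 0.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # simulated eQTL labels

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
clf.fit(X, y)
print("coefficients:", clf.coef_.round(2))  # sign indicates enrichment vs depletion
```

    The signs of the fitted coefficients play the role of the enrichment/depletion statements in the abstract: positive weights mark annotations over-represented among eQTLs, negative weights mark depleted ones.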

  4. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling.

    PubMed

    Dick, Daniel G; Maxwell, Erin E

    2015-07-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'. PMID:26156130

  5. Modeling autism-relevant behavioral phenotypes in rats and mice: Do 'autistic' rodents exist?

    PubMed

    Servadio, Michela; Vanderschuren, Louk J M J; Trezza, Viviana

    2015-09-01

    Autism spectrum disorders (ASD) are among the most severe developmental psychiatric disorders known today, characterized by impairments in communication and social interaction and stereotyped behaviors. However, no specific treatments for ASD are as yet available. By enabling selective genetic, neural, and pharmacological manipulations, animal studies are essential in ASD research. They make it possible to dissect the role of genetic and environmental factors in the pathogenesis of the disease, circumventing the many confounding variables present in human studies. Furthermore, they make it possible to unravel the relationships between altered brain function in ASD and behavior, and are essential to test new pharmacological options and their side-effects. Here, we first discuss the concepts of construct, face, and predictive validity in rodent models of ASD. Then, we discuss how ASD-relevant behavioral phenotypes can be mimicked in rodents. Finally, we provide examples of environmental and genetic rodent models widely used and validated in ASD research. We conclude that, although no animal model can capture, at once, all the molecular, cellular, and behavioral features of ASD, a useful approach is to focus on specific autism-relevant behavioral features to study their neural underpinnings. This approach has greatly contributed to our understanding of this disease, and is useful in identifying new therapeutic targets. PMID:26226143

  6. Quantitative genetic models for describing simultaneous and recursive relationships between phenotypes.

    PubMed Central

    Gianola, Daniel; Sorensen, Daniel

    2004-01-01

    Multivariate models are of great importance in theoretical and applied quantitative genetics. We extend quantitative genetic theory to accommodate situations in which there is linear feedback or recursiveness between the phenotypes involved in a multivariate system, assuming an infinitesimal, additive model of inheritance. It is shown that the structural parameters defining a simultaneous or recursive system have a bearing on the interpretation of quantitative genetic parameter estimates (e.g., heritability, offspring-parent regression, genetic correlation) when such features are ignored. Matrix representations are given for treating a plethora of feedback-recursive situations. The likelihood function is derived, assuming multivariate normality, and results from econometric theory for parameter identification are adapted to a quantitative genetic setting. A Bayesian treatment with a Markov chain Monte Carlo implementation is developed for inference. When the system is fully recursive, all conditional posterior distributions are in closed form, so Gibbs sampling is straightforward. If there is feedback, a Metropolis step may be embedded for sampling the structural parameters, since their conditional distributions are unknown. Extensions of the model to discrete random variables and to nonlinear relationships between phenotypes are discussed. PMID:15280252
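    In compact matrix form, the class of models treated here can be written as follows; the notation is the generic structural-equation convention, assumed for illustration rather than quoted from the paper.

```latex
\Lambda \, \mathbf{y}_i = \mathbf{X}_i \boldsymbol{\beta} + \mathbf{u}_i + \mathbf{e}_i
\quad \Longrightarrow \quad
\mathbf{y}_i = \Lambda^{-1} \left( \mathbf{X}_i \boldsymbol{\beta} + \mathbf{u}_i + \mathbf{e}_i \right)
```

    Here Λ carries ones on its diagonal and structural coefficients off-diagonal: a lower-triangular Λ gives a fully recursive system, while nonzero entries on both sides of the diagonal encode feedback. The reduced form on the right shows why ignoring Λ distorts heritabilities and genetic correlations estimated from the observed phenotypes.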

  7. Modeling Magnetite Reflectance Spectra Using Hapke Theory and Existing Optical Constants

    NASA Technical Reports Server (NTRS)

    Roush, T. L.; Blewett, D. T.; Cahill, J. T. S.

    2016-01-01

    Magnetite is an accessory mineral found in terrestrial environments, some meteorites, and on the lunar surface. The reflectance of magnetite powders is relatively low [1], and this property makes it an analog for other dark Fe- or Ti-bearing components, particularly ilmenite on the lunar surface. The real and imaginary indices of refraction (optical constants) for magnetite are available in the literature [2-3] and online [4]. Here we use these values to calculate the reflectance of particulates and compare these model spectra to reflectance measurements of magnetite available online [5].
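    For orientation, one common form of Hapke's bidirectional reflectance with isotropic multiple scattering is sketched below; the symbols follow standard usage and are assumed here, not quoted from the abstract.

```latex
r(i, e, g) = \frac{w}{4\pi} \, \frac{\mu_0}{\mu_0 + \mu}
\Big\{ \left[ 1 + B(g) \right] p(g) + H(\mu_0) H(\mu) - 1 \Big\},
\qquad
H(x) \approx \frac{1 + 2x}{1 + 2x \sqrt{1 - w}}
```

    Here μ0 = cos i, μ = cos e, g is the phase angle, p(g) the single-particle phase function, B(g) the opposition surge, and the single-scattering albedo w is derived from the optical constants (n, k) and an assumed grain size.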

  8. Post-Hoc Pattern-Oriented Testing and Tuning of an Existing Large Model: Lessons from the Field Vole

    PubMed Central

    Topping, Christopher J.; Dalkvist, Trine; Grimm, Volker

    2012-01-01

    Pattern-oriented modeling (POM) is a general strategy for modeling complex systems. In POM, multiple patterns observed at different scales and hierarchical levels are used to optimize model structure, to test and select sub-models of key processes, and for calibration. So far, POM has been used for developing new models and for models of low to moderate complexity. It remains unclear, though, whether the basic idea of POM to utilize multiple patterns, could also be used to test and possibly develop existing and established models of high complexity. Here, we use POM to test, calibrate, and further develop an existing agent-based model of the field vole (Microtus agrestis), which was developed and tested within the ALMaSS framework. This framework is complex because it includes a high-resolution representation of the landscape and its dynamics, of the individual’s behavior, and of the interaction between landscape and individual behavior. Results of fitting to the range of patterns chosen were generally very good, but the procedure required to achieve this was long and complicated. To obtain good correspondence between model and the real world it was often necessary to model the real world environment closely. We therefore conclude that post-hoc POM is a useful and viable way to test a highly complex simulation model, but also warn against the dangers of over-fitting to real world patterns that lack details in their explanatory driving factors. To overcome some of these obstacles we suggest the adoption of open-science and open-source approaches to ecological simulation modeling. PMID:23049882

  9. Water Use Conservation Scenarios for the Mississippi Delta Using an Existing Regional Groundwater Flow Model

    NASA Astrophysics Data System (ADS)

    Barlow, J. R.; Clark, B. R.

    2010-12-01

    The alluvial plain in northwestern Mississippi, locally referred to as the Delta, is a major agricultural area that contributes significantly to the economy of Mississippi. Land use in this area can be greater than 90 percent agriculture, primarily for growing catfish, corn, cotton, rice, and soybean. Irrigation is needed to smooth out the vagaries of climate and is necessary for the cultivation of rice and for the optimization of corn and soybean yields. The Mississippi River Valley alluvial (MRVA) aquifer, which underlies the Delta, is the sole source of water for irrigation, and overuse of the aquifer has led to water-level declines, particularly in the central region. The Yazoo-Mississippi-Delta Joint Water Management District (YMD), which is responsible for water issues in the 17-county area that makes up the Delta, is directing resources to reduce water use through conservation efforts. The U.S. Geological Survey (USGS) recently completed a regional groundwater flow model of the entire Mississippi embayment, including the Mississippi Delta region, to further our understanding of water availability within the embayment system. This model is being used by the USGS to assist YMD in optimizing their conservation efforts by applying various water-use reduction scenarios, either uniformly throughout the Delta or in focused areas where there have been large groundwater declines in the MRVA aquifer.

  10. Design of a Representative Low Earth Orbit Satellite to Improve Existing Debris Models

    NASA Technical Reports Server (NTRS)

    Clark, S.; Dietrich, A.; Werremeyer, M.; Fitz-Coy, N.; Liou, J.-C.

    2012-01-01

    This paper summarizes the process and methodologies used in the design of a small satellite, DebriSat, that represents the materials and construction methods used in modern low Earth orbit (LEO) satellites. This satellite will be used in a future hypervelocity impact test with the overall purpose of investigating the physical characteristics of modern LEO satellites after an on-orbit collision. The major ground-based satellite impact experiment used by DoD and NASA in their development of satellite breakup models was conducted in 1992. The target used for that experiment was a Navy Transit satellite (40 cm, 35 kg) fabricated in the 1960s. Modern satellites are very different in materials and construction techniques from a satellite built 40 years ago. Therefore, there is a need to conduct a similar experiment using a modern target satellite to improve the fidelity of the satellite breakup models. DebriSat is designed and built as a next-generation satellite that more accurately portrays modern satellites. The design of DebriSat included a comprehensive study of historical LEO satellite designs and missions within the past 15 years, for satellites ranging from 10 kg to 5000 kg. This study identified modern trends in hardware, material, and construction practices utilized in recent LEO missions, and helped direct the design of DebriSat.

  11. Quantitative Genetics and Functional–Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    PubMed Central

    Letort, Véronique; Mahe, Paul; Cournède, Paul-Henry; de Reffye, Philippe; Courtois, Brigitte

    2008-01-01

    Background and Aims Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype × environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional–structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods The GREENLAB model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the species-specific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. Key Results and Conclusions By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared in the case of phenotypic traits – such as cob weight – and when traits were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters (and the corresponding genotype) associated with yield optimization of a GREENLAB maize model. The paper discusses the potentials of GREENLAB to represent environment × genotype
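    The abstract's yield-optimization step can be pictured with a toy genetic algorithm like the one below; the genome encoding, fitness function, and GA settings are invented stand-ins, far simpler than the GREENLAB model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def yield_from_genome(genome):
    """Toy stand-in for a growth-model evaluation: each virtual locus
    contributes additively to a model parameter, and 'yield' is a simple
    function of those parameters (GREENLAB itself is far richer)."""
    sink, source = genome[:8].sum(), genome[8:].sum()
    return source * sink / (1.0 + 0.1 * abs(source - sink))

pop = rng.integers(0, 2, size=(50, 16))           # 50 individuals, 16 virtual loci
for gen in range(100):
    fitness = np.array([yield_from_genome(g) for g in pop])
    parents = pop[np.argsort(fitness)][-25:]      # truncation selection
    cut = rng.integers(1, 15, size=25)
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 25][c:]]
                     for i, c in enumerate(cut)]) # one-point crossover
    mutate = rng.random(kids.shape) < 0.01
    kids[mutate] ^= 1                             # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([yield_from_genome(g) for g in pop])]
print("best allelic combination:", best)
```

    In the paper's setting, the fitness call would be a full functional-structural growth simulation, and the best genome corresponds to the ideotype's allelic combination.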

  12. Existence of a metallic phase in a 1D Holstein Hubbard model at half filling

    NASA Astrophysics Data System (ADS)

    Krishna, Phani Murali; Chatterjee, Ashok

    2007-06-01

    The one-dimensional half-filled Holstein-Hubbard model is studied using a series of canonical transformations that include a phonon coherence effect partly dependent on the electron density and partly independent of it, incorporating the on-site and nearest-neighbour phonon correlations and the exact Bethe-ansatz solution of Lieb and Wu. It is shown that choosing a better variational phonon state makes the polarons more mobile and widens the intermediate metallic region at the charge-density-wave/spin-density-wave crossover recently predicted by Takada and Chatterjee. The presence of this metallic phase is indeed a favourable situation from the point of view of high-temperature superconductivity.
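    For reference, the Holstein-Hubbard Hamiltonian in one common convention (symbols assumed here, not taken from the paper) is:

```latex
H = -t \sum_{\langle i j \rangle, \sigma} c^{\dagger}_{i\sigma} c_{j\sigma}
  + U \sum_i n_{i\uparrow} n_{i\downarrow}
  + \omega_0 \sum_i b^{\dagger}_i b_i
  + g \sum_{i, \sigma} n_{i\sigma} \big( b^{\dagger}_i + b_i \big)
```

    with hopping t, on-site repulsion U, dispersionless phonon frequency ω0 and electron-phonon coupling g; the intermediate metallic window is expected where the phonon-mediated attraction and the on-site repulsion nearly compensate.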

  13. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling

    PubMed Central

    Dick, Daniel G.; Maxwell, Erin E.

    2015-01-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the ‘migration model’. PMID:26156130

  14. Existence of limit cycles and homoclinic bifurcation in a plant-herbivore model with toxin-determined functional response

    NASA Astrophysics Data System (ADS)

    Zhao, Yulin; Feng, Zhilan; Zheng, Yiqiang; Cen, Xiuli

    2015-04-01

    In this paper we study a two-dimensional toxin-determined functional response model (TDFRM). The toxin-determined functional response explicitly takes into consideration the reduction in the consumption of plants by herbivores due to chemical defense, which generates more complex dynamics of the plant-herbivore interactions. The purpose of the present paper is to analyze the existence of limit cycles and bifurcations of the model. By applying the theory of rotated vector fields and the extended planar termination principle, we establish conditions for the existence of limit cycles and a homoclinic loop. It is shown that a limit cycle is generated in a supercritical Hopf bifurcation and terminated in a homoclinic bifurcation as the parameters vary. Analytic proofs are provided for all results, which generalize the results presented in [11].

  15. A systematic review of the existing models of disordered eating: Do they inform the development of effective interventions?

    PubMed

    Pennesi, Jamie-Lee; Wade, Tracey D

    2016-02-01

    Despite significant advances in the development of prevention and treatment interventions for eating disorders and disordered eating over the last decade, there remains a pressing need to develop more effective interventions. In line with the 2008 Medical Research Council (MRC) framework from the United Kingdom for the development and evaluation of complex interventions to improve health, the development of sound theory is a necessary precursor to the development of effective interventions. The aim of the current review was to identify the existing models of disordered eating and, among them, those models that have informed the development of interventions for disordered eating. In addition, we examine the variables that most commonly appear across these models, in terms of future implications for the development of interventions for disordered eating. While an extensive range of theoretical models of the development of disordered eating were identified (N=54), only ten (18.5%) had progressed beyond mere description to the development of interventions that have been evaluated. It is recommended that future work examine whether interventions for eating disorders increase in efficacy when developed in line with theoretical considerations, that the initiation of new models give way to further development of existing models, and that intervention studies be utilized more extensively to inform the development of theory. PMID:26781985

  16. Quantitative assignment of reaction directionality in constraint-based models of metabolism: Application to Escherichia coli

    PubMed Central

    Fleming, R.M.T.; Thiele, I.; Nasheuer, H.P.

    2009-01-01

    Constraint-based modeling is an approach for quantitative prediction of net reaction flux in genome-scale biochemical networks. In vivo, the second law of thermodynamics requires that net macroscopic flux be forward when the transformed reaction Gibbs energy is negative. We calculate the latter by using (i) group contribution estimates of metabolite species Gibbs energy, combined with (ii) experimentally measured equilibrium constants. In an application to a genome-scale stoichiometric model of E. coli metabolism, iAF1260, we demonstrate that quantitative prediction of reaction directionality is increased in scope and accuracy by integration of both data sources, transformed appropriately to in vivo pH, temperature and ionic strength. Comparison of quantitative versus qualitative assignment of reaction directionality in iAF1260, assuming an accommodating reactant concentration range of 0.02-20 mM, revealed that quantitative assignment leads to a low false positive, but high false negative, prediction of effectively irreversible reactions. The latter is partly due to the uncertainty associated with group contribution estimates. We also uncovered evidence that the high intracellular concentration of glutamate in E. coli may be essential to direct otherwise thermodynamically unfavorable essential reactions, such as the leucine transaminase reaction, in an anabolic direction. PMID:19783351
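    The directionality criterion can be summarized compactly in standard thermodynamic notation (assumed here, not quoted from the paper):

```latex
\Delta_r G' = \Delta_r G'^{\circ} + RT \ln \prod_j \left( \frac{c_j}{c^{\circ}} \right)^{s_j} < 0
\;\; \Longleftrightarrow \;\; \text{net flux is forward}
```

    where the c_j are reactant concentrations and s_j their stoichiometric coefficients. A reaction is then assigned as effectively irreversible only if the transformed reaction Gibbs energy keeps a single sign over the whole accommodating concentration range (0.02-20 mM above).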

  17. 18FDG synthesis and supply: a journey from existing centralized to future decentralized models.

    PubMed

    Uz Zaman, Maseeh; Fatima, Nosheen; Sajjad, Zafar; Zaman, Unaiza; Tahseen, Rabia; Zaman, Areeba

    2014-01-01

    Positron emission tomography (PET), as the functional component of current hybrid imaging (PET/CT or PET/MRI), seems set to dominate the horizon of medical imaging in the coming decades. 18Fluorodeoxyglucose (18FDG) is the most commonly used probe in oncology, and also in cardiology and neurology, around the globe. However, the major capital cost and exorbitant running expenditure of low-to-medium energy cyclotrons (about 20 MeV) and radiochemistry units are the main reasons for the low number of cyclotrons alongside the mushrooming growth of PET scanners. This fact, and the longer half-life of 18F (110 minutes), have paved the way for a centralized model in which 18FDG is produced by commercial PET radiopharmacies and the finished product (a multi-dose vial with tungsten shielding) is dispensed to customers having only PET scanners. This has indeed reduced the cost, but it has the limitation of depending on the timely arrival of daily shipments, as any delay results in cancellation or rescheduling of PET procedures. In recent years, industry and academia have taken a step forward by producing low energy, table-top cyclotrons with compact and automated radiochemistry units (lab-on-chip). This decentralized strategy enables users to produce on-demand doses of a PET probe themselves at reasonably low cost using an automated and user-friendly technology. This technological development would provide a real impetus to the availability of complete PET-based molecular imaging setups at an affordable cost to developing countries. PMID:25556425

  18. Evaluation of Modeled and Measured Energy Savings in Existing All Electric Public Housing in the Pacific Northwest

    SciTech Connect

    Gordon, Andrew; Lubliner, Michael; Howard, Luke; Kunkle, Rick; Salzberg, Emily

    2014-04-01

    This project analyzes the cost effectiveness of energy savings measures installed by a large public housing authority in Salishan, a community in Tacoma, Washington. The research focuses on the modeled and measured energy usage of the first six phases of construction, and compares the energy usage of those phases to phase 7. Market-ready energy solutions were also evaluated to improve the efficiency of new and existing (built since 2001) affordable housing in the marine climate of Washington State.

  19. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    PubMed

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches, which focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new, unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented, including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction, and validation. Concerns and issues specific to each step in this kind of data-driven modeling are discussed. PMID:21470169
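    A minimal proteochemometric sketch is given below: descriptors for each protein and each ligand are concatenated row-wise, and validation deliberately holds out whole proteins so the model must generalize to unseen targets. All descriptors, affinities, and dimensions are simulated placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n_prot, n_lig = 20, 50
prot_desc = rng.normal(size=(n_prot, 10))  # stand-in protein descriptors
lig_desc = rng.normal(size=(n_lig, 30))    # stand-in ligand descriptors

# Proteochemometric design: one row per protein-ligand pair, with the
# protein and ligand descriptor blocks concatenated.
pairs = [(p, l) for p in range(n_prot) for l in range(n_lig)]
X = np.array([np.r_[prot_desc[p], lig_desc[l]] for p, l in pairs])
y = X[:, 0] * X[:, 10] + 0.1 * rng.normal(size=len(pairs))  # toy "affinity"
groups = np.array([p for p, _ in pairs])

# Leave whole proteins out during cross-validation, a stricter test than
# random splits because each test fold contains only unseen targets.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print("leave-protein-out R^2:", scores.mean().round(2))
```

    The toy affinity is a protein-by-ligand interaction term, which is precisely the kind of cross-space signal that distinguishes proteochemometrics from single-target QSAR.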

  20. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum

    PubMed Central

    Milic, Natasa M.; Trajkovic, Goran Z.; Bukumiric, Zoran M.; Cirkovic, Andja; Nikolic, Ivan M.; Milin, Jelena S.; Milic, Nikola V.; Savic, Marko D.; Corac, Aleksandar M.; Marinkovic, Jelena M.; Stanisavljevic, Dejana M.

    2016-01-01

    Background Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. Methods This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013–14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Results Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). Conclusion This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient

  1. Testing the influence of vertical, pre-existing joints on normal faulting using analogue and 3D discrete element models (DEM)

    NASA Astrophysics Data System (ADS)

    Kettermann, Michael; von Hagke, Christoph; Virgo, Simon; Urai, Janos L.

    2015-04-01

    Brittle rocks are often affected by different generations of fractures that influence each other. We study pre-existing vertical joints followed by a faulting event. Understanding the effect of these interactions on fracture/fault geometries, as well as the development of dilatancy and the formation of cavities as potential fluid pathways, is crucial for reservoir quality prediction and production. Our approach combines scaled analogue and numerical modeling. Using cohesive hemihydrate powder allows us to create open fractures prior to faulting. The physical models are reproduced using the ESyS-Particle discrete element modeling (DEM) software, and different parameters are investigated. Analogue models were carried out in a manually driven deformation box (30x28x20 cm) with a 60° dipping pre-defined basement fault and 4.5 cm of displacement. To produce open joints prior to faulting, sheets of paper were mounted in the box to a depth of 5 cm at a spacing of 2.5 cm. Powder was then sieved into the box, embedding the paper almost entirely (column height of 19 cm), and the paper was removed. We tested the influence of different angles between the strike of the basement fault and the joint set (0°, 4°, 8°, 12°, 16°, 20°, and 25°). During deformation we captured structural information by time-lapse photography, which allows particle image velocimetry (PIV) analyses to detect localized deformation at every increment of displacement. Post-mortem photogrammetry preserves the final 3-dimensional structure of the fault zone. We observe that no faults or fractures occur parallel to basement-fault strike. Secondary fractures are mostly oriented normal to primary joints. At the final stage of the experiments we analyzed semi-quantitatively the number of connected joints, the number of secondary fractures, the degree of segmentation (i.e. the number of joints accommodating strain), the damage zone width, and the map-view area fraction of open gaps. Whereas the area fraction does not change

  2. Improving the quantitative accuracy of cerebral oxygen saturation in monitoring the injured brain using atlas based Near Infrared Spectroscopy models.

    PubMed

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J E; Su, Zhangjie; Dehghani, Hamid

    2016-08-01

    The application of Near Infrared Spectroscopy (NIRS) for the monitoring of the cerebral oxygen saturation within the brain is well established, albeit using temporal data that can only measure relative changes of oxygenation state of the brain from a baseline. The focus of this investigation is to demonstrate that hybridisation of existing near infrared probe designs and reconstruction techniques can pave the way to produce a system and methods that can be used to monitor the absolute oxygen saturation in the injured brain. Using registered Atlas models in simulation, a novel method is outlined by which the quantitative accuracy and practicality of NIRS for specific use in monitoring the injured brain, can be improved, with cerebral saturation being recovered to within 10.1 ± 1.8% of the expected values. PMID:27003677

  3. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... HUMAN SERVICES Food and Drug Administration Use of Influenza Disease Models To Quantitatively Evaluate... public workshop entitled: ``Use of Influenza Disease Models to Quantitatively Evaluate the Benefits and... hypothetical influenza vaccine, and to seek from a range of experts, feedback on the current version of...

  4. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    NASA Astrophysics Data System (ADS)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
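    For reference, the OV model in a common parameterization (symbols assumed here, not the paper's) is:

```latex
\ddot{x}_n = a \left[ V(\Delta x_n) - \dot{x}_n \right],
\qquad
V(\Delta x) = \frac{v_{\max}}{2} \left[ \tanh\!\left( \frac{\Delta x - d}{w} \right) + \tanh\!\left( \frac{d}{w} \right) \right]
```

    where Δx_n is the headway of car n, a the driver sensitivity, and d, w set the inflection point and width of the optimal-velocity function. The scaling rule mentioned in the abstract collapses these parameters so that the critical density depends on a single combination of them.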

  5. Experimentally validated quantitative linear model for the device physics of elastomeric microfluidic valves

    NASA Astrophysics Data System (ADS)

    Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French

    2007-03-01

    A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295 μm lateral dimensions, 16-39 μm membrane thickness, and 1-28 psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150 μm width) are shown to behave like thin springs.

  6. Modelling Activities In Kinematics Understanding quantitative relations with the contribution of qualitative reasoning

    NASA Astrophysics Data System (ADS)

    Orfanos, Stelios

    2010-01-01

    In traditional Greek teaching, many significant concepts are introduced in a sequence that does not provide the students with all the information required for comprehension. We consider that understanding concepts and the relations among them is greatly facilitated by the use of modelling tools, taking into account that the modelling process forces students to change their vague, imprecise ideas into explicit causal relationships. It is not uncommon to find students who are able to solve problems by using complicated relations without getting a qualitative and in-depth grip on them. Researchers have already shown that students often have formal mathematical and physical knowledge without a qualitative understanding of basic concepts and relations. The aim of this communication is to present some of the results of our investigation into modelling activities related to kinematical concepts. For this purpose, we have used ModellingSpace, an environment that was especially designed to allow students from eleven to seventeen years old to express their ideas and gradually develop them. ModellingSpace enables students to build their own models and offers the choice of observing directly simulations of real objects and/or all the other alternative forms of representation (tables of values, graphic representations and bar charts). The students, in order to answer the questions, formulate hypotheses, create models, compare their hypotheses with the representations of their models, and modify or create other models when their hypotheses do not agree with the representations. In traditional teaching, students are trained to use formulas as their principal strategy. They often recall formulas in order to apply them without an in-depth understanding of them. Students commonly use the quantitative type of reasoning, since it is primarily used in teaching, although it may not be fully understood by them

  7. PHYSIOLOGICALLY-BASED PHARMACOKINETIC ( PBPK ) MODEL FOR METHYL TERTIARY BUTYL ETHER ( MTBE ): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  8. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution

    PubMed Central

    Nielsen, Rasmus

    2015-01-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson-Kreitman-Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (with no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity of a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  9. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model

    NASA Astrophysics Data System (ADS)

    Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

    We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and determine the effect on a destination node based on the divided ranges. In combining effects, we determine the effect of each arc using its contribution degree and sum all effects. Through application to practical models, it is confirmed that, at the 5% risk rate, there are no differences between results obtained from quantitative relations and results obtained by the proposed method.
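    The propagation-and-combination step can be illustrated with a generic Monte Carlo sketch; the factors, ranges, and contribution degrees below are invented and stand in for a real qualitative/quantitative hybrid model.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000  # Monte Carlo trials

# Hypothetical causal ranges: each business factor is sampled within its
# (landmark-bounded) range, effects propagate along arcs weighted by assumed
# contribution degrees, and the weighted effects are summed at the target node.
demand = rng.uniform(0.8, 1.2, N)        # relative market demand
unit_cost = rng.uniform(0.9, 1.1, N)     # relative unit cost
profit = 0.7 * demand - 0.3 * unit_cost  # contribution degrees (assumed weights)

lo, hi = np.percentile(profit, [2.5, 97.5])
print(f"expected effect {profit.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
# A 5% risk rate corresponds to reading off this 95% interval.
```

    The statistical values produced this way (means and interval bounds at the target node) are what the method compares against results from purely quantitative relations.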

  10. The optimal hyperspectral quantitative models for chlorophyll-a of chlorella vulgaris

    NASA Astrophysics Data System (ADS)

    Cheng, Qian; Wu, Xiuju

    2009-09-01

    The chlorophyll-a content of Chlorella vulgaris has been related to its spectrum. Based on hyperspectral measurements of Chlorella vulgaris, the hyperspectral characteristics of Chlorella vulgaris and the optimal hyperspectral quantitative models for chlorophyll-a (Chla) estimation were investigated in an in situ experiment. The results showed that the optimal hyperspectral quantitative model for Chlorella vulgaris was Chla = 180.5 + 1125787 (R700)' + 2.4×10^9 [(R700)']^2 (P < 0.01), and the suitability order of corresponding methods was spectral ratio

  11. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    PubMed Central

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key multidisciplinary issues in the field of resources and environmental management. Considering the changing relationship between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  12. The role of pre-existing disturbances in the effect of marine reserves on coastal ecosystems: a modelling approach.

    PubMed

    Savina, Marie; Condie, Scott A; Fulton, Elizabeth A

    2013-01-01

    We have used an end-to-end ecosystem model to explore responses over 30 years to coastal no-take reserves covering up to 6% of the fifty thousand square kilometres of continental shelf and slope off the coast of New South Wales (Australia). The model is based on the Atlantis framework, which includes a deterministic, spatially resolved three-dimensional biophysical model that tracks nutrient flows through key biological groups, as well as extraction by a range of fisheries. The model results support previous empirical studies in finding clear benefits of reserves to top predators such as sharks and rays throughout the region, while also showing how many of their major prey groups (including commercial species) experienced significant declines. It was found that the net impact of marine reserves was dependent on the pre-existing levels of disturbance (i.e. fishing pressure), and to a lesser extent on the size of the marine reserves. The high fishing scenario resulted in a strongly perturbed system, where the introduction of marine reserves had clear and mostly direct effects on biomass and functional biodiversity. However, under the lower fishing pressure scenario, the introduction of marine reserves caused both direct positive effects, mainly on shark groups, and indirect negative effects through trophic cascades. Our study illustrates the need to carefully align the design and implementation of marine reserves with policy and management objectives. Trade-offs may exist not only between fisheries and conservation objectives, but also among conservation objectives. PMID:23593432

  13. The Role of Pre-Existing Disturbances in the Effect of Marine Reserves on Coastal Ecosystems: A Modelling Approach

    PubMed Central

    Savina, Marie; Condie, Scott A.; Fulton, Elizabeth A.

    2013-01-01

    We have used an end-to-end ecosystem model to explore responses over 30 years to coastal no-take reserves covering up to 6% of the fifty thousand square kilometres of continental shelf and slope off the coast of New South Wales (Australia). The model is based on the Atlantis framework, which includes a deterministic, spatially resolved three-dimensional biophysical model that tracks nutrient flows through key biological groups, as well as extraction by a range of fisheries. The model results support previous empirical studies in finding clear benefits of reserves to top predators such as sharks and rays throughout the region, while also showing how many of their major prey groups (including commercial species) experienced significant declines. It was found that the net impact of marine reserves was dependent on the pre-existing levels of disturbance (i.e. fishing pressure), and to a lesser extent on the size of the marine reserves. The high fishing scenario resulted in a strongly perturbed system, where the introduction of marine reserves had clear and mostly direct effects on biomass and functional biodiversity. However, under the lower fishing pressure scenario, the introduction of marine reserves caused both direct positive effects, mainly on shark groups, and indirect negative effects through trophic cascades. Our study illustrates the need to carefully align the design and implementation of marine reserves with policy and management objectives. Trade-offs may exist not only between fisheries and conservation objectives, but also among conservation objectives. PMID:23593432

  14. Evaluation of existing limited sampling models for busulfan kinetics in children with beta thalassaemia major undergoing bone marrow transplantation.

    PubMed

    Balasubramanian, P; Chandy, M; Krishnamoorthy, R; Srivastava, A

    2001-11-01

    Busulfan pharmacokinetic parameters are useful in predicting the outcome of allogeneic bone marrow transplantation (BMT). Standard pharmacokinetic measurements require multiple blood samples. Various limited sampling models (LSMs) have been proposed to reduce the number of samples required for these measurements, essentially for patients with malignant disorders undergoing BMT. This study was undertaken to evaluate the existing LSMs for busulfan pharmacokinetics and find the most suitable method for patients with thalassaemia major undergoing BMT. Busulfan levels in plasma samples were analysed by HPLC. The AUC calculated from our seven-sample pharmacokinetic data by non-compartmental analysis using the program 'TOPFIT' was compared with previously published LSMs. The three-sample models suggested by Chattergoon et al and Schuler et al showed significant agreement with the TOPFIT AUC (R^2 = 0.98 and 0.94, respectively) in our clinical context. Other models resulted in significant over- or under-representation of observed values (Vassal's model R^2 = 0.61; Chattergoon's two-sample model R^2 = 0.84; four-sample model R^2 = 0.83; Schuler's two-sample model R^2 = 0.79). On these data, the three-sample LSMs proposed by Chattergoon et al and Schuler et al are suitable for calculating the AUC in patients with thalassaemia major undergoing BMT conditioned with oral busulfan. PMID:11781641

  15. A revised Fisher model on analysis of quantitative trait loci with multiple alleles

    PubMed Central

    Wang, Tao

    2014-01-01

    Zeng et al. (2005) proposed a general two-allele (G2A) model for bi-allelic quantitative trait loci (QTL). Compared with the classical Fisher model, the G2A model avoids redundant parameters and can be fitted directly using a standard least-squares (LS) approach. In this study, we further extend the G2A model to a general multi-allele (GMA) model. First, we propose a one-locus GMA model for phase-known genotypes based on modeling the inheritance of paternal and maternal alleles. Next, we develop a one-locus GMA model for phase-unknown genotypes by treating it as a special case of the phase-known one-locus GMA model. Third, we extend the one-locus GMA models to multiple loci. We discuss how the genetic variance components can be analyzed using these GMA models in equilibrium as well as disequilibrium populations. Finally, we apply the GMA model to a published experimental data set. PMID:25309580

  16. A quantitative dynamic systems model of health-related quality of life among older adults

    PubMed Central

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722

  17. A quantitative dynamic systems model of health-related quality of life among older adults.

    PubMed

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722

  18. Exploring Existence Value

    NASA Astrophysics Data System (ADS)

    Madariaga, Bruce; McConnell, Kenneth E.

    1987-05-01

    The notion that individuals value the preservation of water resources independent of their own use of these resources is discussed. Issues in defining this value, termed "existence value," are explored. Economic models are employed to assess the role of existence value in benefit-cost analysis. The motives underlying existence value are shown to matter to contingent valuation measurement of existence benefits. A stylized contingent valuation experiment is used to study nonusers' attitudes regarding projects to improve water quality in the Chesapeake Bay. Survey results indicate that altruism is one of the motives underlying existence value and that goods other than environmental and natural resources may provide existence benefits.

  19. Earth conical shadow modeling for LEO satellite using reference frame transformation technique: A comparative study with existing earth conical shadow models

    NASA Astrophysics Data System (ADS)

    Srivastava, V. K.; Yadav, S. M.; Ashutosh; Kumar, J.; Kushvah, B. S.; Ramakrishna, B. N.; Ekambram, P.

    2015-03-01

    In this article, we propose an Earth conical shadow model predicting umbra and penumbra states for low Earth orbiting satellites, considering the spherical shape of the Earth. The model is described using the umbra and penumbra cone geometries of the Earth's shadow and the geometrical equations of these conical shadow regions in a Sun-centered frame. The proposed model is simulated for three polar Sun-synchronous Indian Remote Sensing satellites: Cartosat-2A, Resourcesat-2 and Oceansat-2. The proposed model compares well with the existing spherical Earth conical shadow models such as those given by Vallado (2013), Wertz (2002), Hubaux et al. (2012), and Srivastava et al. (2013, 2014). The existing Earth conical shadow models are also assessed against Systems Tool Kit (STK), a high-fidelity commercial software package from Analytical Graphics, Inc., and against real-time telemetry data.
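
    The umbra/penumbra classification such models perform can be illustrated with the textbook cone geometry for a spherical Earth; a minimal sketch under assumed constants and a fixed Sun direction (the simple construction, not the authors' reference-frame-transformation formulation):

      import numpy as np

      # Simplified spherical-Earth shadow-cone geometry (illustrative only)
      R_SUN = 696_000.0      # solar radius, km
      R_EARTH = 6378.137     # Earth equatorial radius, km
      AU = 149_597_870.7     # assumed Sun-Earth distance, km

      # Half-angles of the umbra and penumbra cones
      alpha_umb = np.arcsin((R_SUN - R_EARTH) / AU)
      alpha_pen = np.arcsin((R_SUN + R_EARTH) / AU)

      def shadow_state(r_sat, s_hat):
          """Classify an Earth-centred satellite position (km) as 'sunlit',
          'penumbra' or 'umbra'; s_hat is the unit vector toward the Sun."""
          proj = r_sat @ s_hat                           # component toward the Sun
          if proj > 0:                                   # day side of the Earth
              return "sunlit"
          x = -proj                                      # depth behind the Earth
          d_axis = np.linalg.norm(r_sat - proj * s_hat)  # distance from shadow axis
          r_umb = R_EARTH - x * np.tan(alpha_umb)        # umbra radius at depth x
          r_pen = R_EARTH + x * np.tan(alpha_pen)        # penumbra radius at depth x
          if d_axis < r_umb:
              return "umbra"
          return "penumbra" if d_axis < r_pen else "sunlit"

      # Example: a satellite 7000 km from Earth's centre, directly anti-Sun
      print(shadow_state(np.array([-7000.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))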

  20. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA

    PubMed Central

    Morris, Jeffrey S.; Baladandayuthapani, Veerabhadran; Herrick, Richard C.; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Many of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method
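
    For context, the canonical functional mixed model underlying this framework can be written in the following standard form (notation assumed here rather than taken from the paper); the wavelet-based special case represents all functions by their wavelet coefficients:

      \[ Y_i(t) = \sum_{j=1}^{p} X_{ij}\, B_j(t) + \sum_{k=1}^{m} Z_{ik}\, U_k(t) + E_i(t) \]

    Here Y_i(t) is the i-th observed image viewed as a function on the domain t, B_j(t) are fixed-effect functions for design covariates X_ij, U_k(t) are random-effect functions capturing correlation between images induced by the design, and E_i(t) is residual error.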

  1. The Analysis of Quantitative Traits for Simple Genetic Models from Parental, F1 and Backcross Data

    PubMed Central

    Elston, R. C.; Stewart, John

    1973-01-01

    The following models are considered for the genetic determination of quantitative traits: segregation at one locus, at two linked loci, at any number of equal and additive unlinked loci, and at one major locus and an indefinite number of equal and additive loci. In each case an appropriate likelihood is given for data on parental, F1 and backcross individuals, assuming that the environmental variation is normally distributed. Methods of testing and comparing the various models are presented, and methods are suggested for the simultaneous analysis of two or more traits. PMID:4711900
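
    As a concrete illustration of the likelihoods involved, the single-locus case with backcross data gives a two-component normal mixture, since a backcross individual carries either genotype with probability 1/2 (a generic sketch; the symbols are chosen here and are not the paper's):

      \[ L(\mu_1, \mu_2, \sigma) = \prod_{i=1}^{n} \left[ \frac{1}{2\sigma}\, \phi\!\left(\frac{x_i - \mu_1}{\sigma}\right) + \frac{1}{2\sigma}\, \phi\!\left(\frac{x_i - \mu_2}{\sigma}\right) \right] \]

    where φ is the standard normal density and μ1, μ2 are the genotype means; the multi-locus models replace the two components by the appropriate mixture over genotype classes.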

  2. Impact of Model Uncertainties on Quantitative Analysis of FUV Auroral Images: Peak Production Height

    NASA Technical Reports Server (NTRS)

    Germany, G. A.; Lummerzheim, D.; Parks, G. K.; Brittnacher, M. J.; Spann, James F., Jr.; Richards, Phil G.

    1999-01-01

    We demonstrate that small uncertainties in the modeled height of peak production for FUV emissions can lead to significant uncertainties in the analysis of these same emissions. In particular, an uncertainty of only 3 km in the peak production height can lead to a 50% uncertainty in the mean auroral energy deduced from the images. This altitude uncertainty is comparable to the differences between the auroral deposition models currently used for UVI analysis. Consequently, great care must be taken in quantitative photometric analysis and interpretation of FUV auroral images.

  3. Linear and nonlinear quantitative structure-property relationship modelling of skin permeability.

    PubMed

    Khajeh, A; Modarress, H

    2014-01-01

    In this work, quantitative structure-property relationship (QSPR) models were developed to estimate skin permeability based on theoretically derived molecular descriptors and a diverse set of experimental data. The newly developed method combining modified particle swarm optimization (MPSO) and multiple linear regression (MLR) was used to select important descriptors and develop the linear model using a training set of 225 compounds. The adaptive neuro-fuzzy inference system (ANFIS) was used as an efficient nonlinear method to correlate the selected descriptors with experimental skin permeability data (log Kp). The linear and nonlinear models were assessed by internal and external validation. The obtained models with three descriptors show good predictive ability for the test set, with coefficients of determination for the MPSO-MLR and ANFIS models equal to 0.874 and 0.890, respectively. The QSPR study suggests that hydrophobicity (encoded as log P) is the most important factor in transdermal penetration. PMID:24090175
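
    A minimal sketch of the MLR half of such a workflow, with synthetic data standing in for the 225 compounds and three hypothetical descriptors standing in for the MPSO-selected ones (the descriptor-selection and ANFIS stages are omitted):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(225, 3))   # stand-ins for three selected descriptors
      # Hypothetical log Kp dominated by the first descriptor (e.g. log P)
      y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.3 * rng.normal(size=225)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
      model = LinearRegression().fit(X_tr, y_tr)
      print("external-validation R^2:", round(r2_score(y_te, model.predict(X_te)), 3))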

  4. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    PubMed Central

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction: Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods: A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results: The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix evaluating model components and methodology. Conclusion: When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimens and test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (e.g., HCV and HIV at the same time) might prove important for decision makers. PMID:26689908

  5. Multiplicative effects model with internal standard in mobile phase for quantitative liquid chromatography-mass spectrometry.

    PubMed

    Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen

    2014-07-01

    Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added into the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer to ensure that variations in measurement conditions and/or instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on the unique strategy of adding internal standard in mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof of concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of conventional internal standard method based on the peak height ratio and partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended in quantitative LC-MS analysis of more complex systems. PMID:24840455
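
    The essence of the multiplicative effects idea can be sketched in two lines. If instrumental drift at time t enters as a common multiplicative factor κ_t on the signals of both the analyte and the mobile-phase internal standard (a simplified sketch; the published model is more general), then

      \[ s_{a,t} = \kappa_t\, f_a(c_a), \qquad s_{\mathrm{IS},t} = \kappa_t\, f_{\mathrm{IS}}(c_{\mathrm{IS}}) \quad\Longrightarrow\quad \frac{s_{a,t}}{s_{\mathrm{IS},t}} = \frac{f_a(c_a)}{f_{\mathrm{IS}}(c_{\mathrm{IS}})}, \]

    so the drift factor κ_t cancels in the ratio and calibration depends only on the analyte and internal standard responses.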

  6. Quantitative Multi-Parametric PROPELLER MRI of Diethylnitrosamine-Induced Hepatocarcinogenesis in Wistar Rat Model

    PubMed Central

    Deng, Jie; Jin, Ning; Yin, Xiaoming; Yang, Guang-Yu; Zhang, Zhuoli; Omary, Reed A.; Larson, Andrew C.

    2010-01-01

    PURPOSE To develop a quantitative multi-parametric PROPELLER (periodically rotated overlapping parallel lines with enhanced reconstruction) MRI approach and its application in a diethylnitrosamine (DEN) chemically-induced rodent model of hepatocarcinogenesis for lesion characterization. MATERIALS AND METHODS In nine rats with 33 cirrhosis-associated hepatic nodules including regenerative nodule (RN), dysplastic nodule (DN), hepatocellular carcinoma (HCC) and cyst, multi-parametric PROPELLER MRI (diffusion-weighted, T2/M0 (proton density) mapping and T1-weighted) was performed. Apparent diffusion coefficient (ADC) maps, T2 and M0 maps of each tumor were generated. We compared ADC, T2 and M0 measurements for each type of hepatic nodule, confirmed at histopathology. RESULTS PROPELLER images and resultant parametric maps were inherently co-registered without image distortion or motion artifacts. All types of hepatic nodules demonstrated complex imaging characteristics within conventional T1- and T2-weighted images. Quantitatively, cysts were distinguished from RN, DN and HCC with significantly higher ADC and T2; however, there was no significant difference of ADC and T2 between HCC, DN and RN. Mean tumor M0 values of HCC were significantly higher than those of DN, RN and cysts. CONCLUSION This study exploited quantitative PROPELLER MRI and multi-dimensional analysis approaches in an attempt to differentiate hepatic nodules in the DEN rodent model of hepatocarcinogenesis. This method offers great potential for parallel parameterization during non-invasive interrogation of hepatic tissue properties. PMID:20432363
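
    For reference, ADC maps of the kind generated here are commonly computed voxel-wise from the mono-exponential diffusion model (a standard relation, not specific to this study):

      \[ S(b) = S_0\, e^{-b \cdot \mathrm{ADC}} \quad\Longrightarrow\quad \mathrm{ADC} = \frac{1}{b}\, \ln\frac{S_0}{S(b)} \]

    with S_0 the signal without diffusion weighting and b the diffusion-weighting factor; T2 maps follow analogously from exponential fits of signal versus echo time.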

  7. Existence and asymptotics of traveling wave fronts for a delayed nonlocal diffusion model with a quiescent stage

    NASA Astrophysics Data System (ADS)

    Zhou, Kai; Lin, Yuan; Wang, Qi-Ru

    2013-11-01

    In this paper, we propose a delayed nonlocal diffusion model with a quiescent stage and study its dynamics. By using Schauder's fixed point theorem and the upper-lower solution method, we establish the existence of traveling wave fronts for speed c⩾c∗(τ), where c∗(τ) is a critical value. With the method of Carr and Chmaj (PAMS, 2004), we discuss the asymptotic behavior of traveling wave fronts and then obtain the nonexistence of traveling wave fronts for c<c∗(τ).

  8. A quantitative model for flux flow resistivity and Nernst effect of vortex fluid in high-temperature superconductors

    NASA Astrophysics Data System (ADS)

    Li, Rong; She, Zhen-Su; Yin, Lan; State Key Laboratory for Turbulence and Complex Systems Team

    Transport properties of the vortex fluid in high-temperature superconductors have been described in terms of the viscous dynamics of magnetic and thermal vortices. We have constructed a quantitative model by extending the Bardeen-Stephen model of damping viscosity to include the contributions of flux pinning at low temperature and vortex-vortex interactions in high magnetic field. A uniformly accurate description of flux flow resistivity and the Nernst signal is achieved for empirical data over a wide range of temperature and magnetic field strength. A discrepancy of three orders of magnitude between the data and the Anderson model of the Nernst signal is pointed out, suggesting the existence of anomalous transport in high-temperature superconductors beyond mere quantum and thermal fluctuations. The model makes it possible to derive a set of physical parameters characterizing the vortex dynamics from the Nernst signal, as we illustrate with an analysis of six samples of Bi2Sr2-yLayCuO6 and Bi2Sr2CaCu2O8+δ.
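
    For context, the original Bardeen-Stephen result that the authors extend gives the free flux-flow resistivity as (standard form; the extended model adds flux-pinning and vortex-vortex interaction contributions):

      \[ \rho_{\mathrm{ff}} \approx \rho_n\, \frac{B}{B_{c2}(T)} \]

    where ρ_n is the normal-state resistivity, B the applied magnetic field, and B_c2(T) the upper critical field.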

  9. D-Factor: A Quantitative Model of Application Slow-Down in Multi-Resource Shared Systems

    SciTech Connect

    Lim, Seung-Hwan; Huh, Jae-Seok; Kim, Youngjae; Shipman, Galen M; Das, Chita

    2012-01-01

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, often including reduced energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this paper, we analyze the slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by vector-valued loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure to calculate the dilation factor (loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.
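
    The shape of the model can be illustrated with a toy quadratic form; the contention matrix below is hypothetical, standing in for the loading matrix that the paper maintains from measured job statistics:

      import numpy as np

      # Toy illustration (not the paper's exact formulation): each job is
      # described by a loading vector over shared resources (CPU, disk, net),
      # and the dilation factor of job i co-scheduled with job j is modeled
      # as a quadratic form of the two loading vectors.
      M = np.array([[0.2, 0.6, 0.4],    # hypothetical contention coefficients;
                    [0.6, 0.9, 0.3],    # in practice these would be calibrated
                    [0.4, 0.3, 0.5]])   # from measured co-run slow-downs

      def dilation(load_i, load_j):
          """Dilation factor >= 1: slow-down of job i when co-run with job j."""
          return 1.0 + load_i @ M @ load_j

      cpu_bound = np.array([0.9, 0.1, 0.1])
      io_bound = np.array([0.2, 0.8, 0.3])

      print(dilation(cpu_bound, io_bound))  # slow-down of the CPU-bound job
      print(dilation(io_bound, cpu_bound))  # slow-down of the I/O-bound job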

  10. New Tools and the Road to Quantitative Models in Sedimentary Provenance Analysis

    NASA Astrophysics Data System (ADS)

    von Eynatten, Hilmar

    2010-05-01

    Sedimentary provenance analysis is one of the major techniques to link source area geology, climate evolution, and basin dynamics to the compositional characteristics of the clastic basin fill. The high potential of sediments for precise chronostratigraphic calibration in combination with state-of-the-art provenance analysis allows for detailed reconstruction of source area evolution in space and time. A wealth of new and/or refined analytical techniques has been developed in the last decade, especially regarding high-precision single-grain geochemical and geochronological techniques. Accordingly, ultrastable heavy minerals such as rutile or zircon provide inert mineral tracers in sedimentary systems, and their analysis yields precise information on source rock petrology and chronology. In terms of quantitative provenance analysis there is, however, a strong need for connecting this detailed information on specific source rocks to the bulk mass transfer and sediment modification from source to sink. Such quantitative provenance models are still in their infancy for a number of reasons, among them (1) the overall complexity of the processes involved, including multiple feedback mechanisms, (2) the heterogeneity of data bases with respect to large-scale basin-wide studies, and (3) the lack of tailor-made and user-friendly statistical-numerical models allowing for both forward and inverse modelling that consider the compositional nature of most bulk sediment data. First steps towards fully quantitative models include (i) development of algorithms relating petrographic-mineralogic and geochemical data to sediment grain size, (ii) quantifying chemical, physical, and biological processes and their impact on sediment production and modification, (iii) compositional mixture models, and (iv) verifying these analytical modules in large-scale modern systems, followed by (v) similar ancient systems that are even more complicated due to diagenetic processes.
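
    A minimal example of item (iii): in the simplest two-end-member case, a compositional mixture model reduces to estimating a mixing proportion on the simplex (end-member compositions below are hypothetical):

      import numpy as np

      # Hypothetical end-member compositions (three components, closed to 1)
      source_a = np.array([0.60, 0.30, 0.10])
      source_b = np.array([0.10, 0.20, 0.70])
      observed = np.array([0.35, 0.25, 0.40])   # measured sediment composition

      # Least-squares mixing proportion p of source A (1 - p of source B)
      d = source_a - source_b
      p = np.clip(d @ (observed - source_b) / (d @ d), 0.0, 1.0)
      print(f"estimated contribution of end-member A: {p:.2f}")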

  11. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A two-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  12. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A two-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  13. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    PubMed

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to amplify its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model of quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence Model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to Environmental Research Centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. PMID:23892022

  14. Extraction, separation and quantitative structure-retention relationship modeling of essential oils in three herbs.

    PubMed

    Wei, Yuhui; Xi, Lili; Chen, Dongxia; Wu, Xin'an; Liu, Huanxiang; Yao, Xiaojun

    2010-07-01

    The essential oils extracted from three kinds of herbs were separated by a 5% phenylmethyl silicone (DB-5MS) bonded phase fused-silica capillary column and identified by MS. Seventy-four of the compounds identified were selected as the source data, and their chemical structures and gas chromatographic retention times (RT) were used to build a quantitative structure-retention relationship model by genetic algorithm and multiple linear regression analysis. The predictive ability of the model was verified by internal validation (leave-one-out and fivefold cross-validation, and Y-scrambling). As for external validation, the model was also applied to predict the gas chromatographic RT of the 14 volatile compounds not used for model development from the essential oil of Radix angelicae sinensis. The applicability domain was checked by the leverage approach to verify prediction reliability. The results obtained using several validations indicated that the best quantitative structure-retention relationship model was robust and satisfactory, and could provide a feasible and effective tool for predicting the gas chromatographic RT of volatile compounds; it could also be applied to help identify compounds with the same gas chromatographic RT. PMID:20506431

  15. Quantitative SHG imaging in osteoarthritis model mice, implying a diagnostic application.

    PubMed

    Kiyomatsu, Hiroshi; Oshima, Yusuke; Saitou, Takashi; Miyazaki, Tsuyoshi; Hikita, Atsuhiko; Miura, Hiromasa; Iimura, Tadahiro; Imamura, Takeshi

    2015-02-01

    Osteoarthritis (OA) restricts the daily activities of patients and significantly decreases their quality of life. The development of non-invasive quantitative methods for properly diagnosing and evaluating the process of degeneration of articular cartilage due to OA is essential. Second harmonic generation (SHG) imaging enables the observation of collagen fibrils in live tissues or organs without staining. In the present study, we employed SHG imaging of the articular cartilage in OA model mice ex vivo. Consequently, three-dimensional SHG imaging with successive image processing and statistical analyses allowed us to successfully characterize histopathological changes in the articular cartilage consistently confirmed on histological analyses. The quantitative SHG imaging technique presented in this study constitutes a diagnostic application of this technology in the setting of OA. PMID:25780732

  16. Mechanical Model Analysis for Quantitative Evaluation of Liver Fibrosis Based on Ultrasound Tissue Elasticity Imaging

    NASA Astrophysics Data System (ADS)

    Shiina, Tsuyoshi; Maki, Tomonori; Yamakawa, Makoto; Mitake, Tsuyoshi; Kudo, Masatoshi; Fujimoto, Kenji

    2012-07-01

    Precise evaluation of the stage of chronic hepatitis C with respect to fibrosis has become an important issue to prevent the occurrence of cirrhosis and to initiate appropriate therapeutic intervention such as viral eradication using interferon. Ultrasound tissue elasticity imaging, i.e., elastography, can visualize tissue hardness/softness, and its clinical usefulness has been studied to detect and evaluate tumors. We have recently reported that the texture of the elasticity image changes as fibrosis progresses. To evaluate fibrosis progression quantitatively on the basis of ultrasound tissue elasticity imaging, we introduced a mechanical model of fibrosis progression, simulated the process by which hepatic fibrosis affects elasticity images, and compared the results with those of clinical data analysis. As a result, it was confirmed that even in diffuse diseases like chronic hepatitis, the patterns of elasticity images are related to fibrous structural changes caused by hepatic disease and can be used to derive features for quantitative evaluation of fibrosis stage.

  17. A Model for Subglacial Flooding Along a Pre-Existing Hydrological Network during the Rapid Drainage of Supraglacial Lakes

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Tsai, V. C.

    2014-12-01

    Increasingly large numbers of supraglacial lakes form and drain every summer on the Greenland Ice Sheet. Presently, about 15% of the lakes drain rapidly, within the timescale of a few hours, and the vertical discharge of water during these events may encounter a pre-existing subglacial hydrological network, particularly late in the melt season. Here, we present a model for subglacial flooding applied specifically to such circumstances. Given the short timescale of events, we treat ice and bed as purely elastic and assume that the fluid flow in the subglacial conduit is fully turbulent. We evaluate the effect of initial conduit opening, wi, on the rate of flood propagation and along-flow profiles of field variables. We find that floods propagate much faster, particularly at early times, for larger wi. For wi = 10 and 1 cm, for example, floods travel about 68% and 50% farther than in the fully coupled ice/bed scenario after 2 hours of drainage, respectively. Irrespective of the magnitude of wi, we also find that there exists a region of positive pressure gradient. This reversal of the pressure gradient draws water in from the far field and causes the conduit to narrow, respecting mass continuity. While the general shape of the profiles appears similar, greater conduit opening is found for larger wi. For wi = 10 and 1 cm, for example, the elastostatic conduit opening at the point of injection is about 1.39 and 1.26 times that of the fully coupled ice/bed scenario after 2 hours of drainage. The hypothesis of a pre-existing thin film of water is consistent with the spirit of contemporary state-of-the-art continuum models for subglacial hydrology. This also results in avoiding the pressure singularity, which is inherent in classical hydro-fracture models applied to fully coupled ice/bed scenarios, thus opening an avenue for integrating the likes of our model within continuum hydrological models. Furthermore, we foresee that the theory presented can be used to potentially infer

  18. Modelling CEC variations versus structural iron reduction levels in dioctahedral smectites. Existing approaches, new data and model refinements.

    PubMed

    Hadi, Jebril; Tournassat, Christophe; Ignatiadis, Ioannis; Greneche, Jean Marc; Charlet, Laurent

    2013-10-01

    A model was developed to describe how the excess negative charge of the 2:1 layer, induced by the reduction of structural Fe(III) to Fe(II) by sodium dithionite buffered with citrate-bicarbonate, is balanced; the model is applied to nontronites. This model is based on new experimental data and extends the structural interpretation introduced by a former model [36-38]. The increase in negative 2:1 layer charge due to Fe(III) to Fe(II) reduction is balanced by an excess adsorption of cations in the clay interlayers and a specific sorption of H(+) from solution. Prevalence of one compensating mechanism over the other is related to the growing lattice distortion induced by structural Fe(III) reduction. At low reduction levels, cation adsorption dominates and some of the incorporated protons react with structural OH groups, leading to a dehydroxylation of the structure. Starting from a moderate reduction level, other structural changes occur, leading to a reorganisation of the octahedral and tetrahedral lattice: migration or release of cations, intense dehydroxylation and bonding of protons to undersaturated oxygen atoms. Experimental data highlight some particular properties of ferruginous smectites regarding chemical reduction. Contrary to previous assumptions, the negative layer charge of nontronites does not simply increase towards a plateau value upon reduction. A peak is observed in the reduction domain. After this peak, the negative layer charge decreases upon extended reduction (>30%). The decrease is so dramatic that the layer charge of highly reduced nontronites can fall below that of their fully oxidised counterparts. Furthermore, the presence of a large amount of tetrahedral Fe seems to promote intense clay structural changes and Fe reducibility. Our newly acquired data clearly show that models currently available in the literature cannot be applied to the whole reduction range of clay structural Fe. Moreover, changes in the model normalising procedure clearly demonstrate that the investigated low

  19. Can we better use existing and emerging computing hardware to embed activity coefficient predictions in complex atmospheric aerosol models?

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Ruske, Simon; Hindriksen, Vincent; Noisternig, Michael

    2016-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and their subsequent mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, even at the single aerosol particle level it has been impossible to embed fully coupled representations of process-level knowledge for all possible compounds, and models typically rely on heavily parameterised descriptions, which leaves open the question of whether we can quantify the true sensitivity to uncertainties in molecular properties. Relying on emerging numerical frameworks designed for the changing landscape of high-performance computing (HPC), in this study we show that comprehensive microphysical models from the single particle to larger scales can be developed to encompass a complete state-of-the-art knowledge of aerosol chemical and process diversity. We focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method, profiling traditional coding strategies and those that exploit emerging hardware.
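
    The vectorization argument can be illustrated with a deliberately simple activity-coefficient model; a two-suffix Margules expression (far simpler than UNIFAC, with a made-up interaction parameter A) evaluated for a million particle compositions in single array operations:

      import numpy as np

      A = 1.2                           # hypothetical interaction parameter
      x1 = np.random.rand(1_000_000)    # mole fraction of component 1 per particle
      x2 = 1.0 - x1

      # Activity coefficients for every particle at once, no Python loop:
      # the same batching pattern carries over to GPU-style hardware.
      gamma1 = np.exp(A * x2**2)
      gamma2 = np.exp(A * x1**2)
      print(gamma1[:3], gamma2[:3])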

  20. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

    PubMed Central

    Rieger, TR; Musante, CJ

    2016-01-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777

  1. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models.

    PubMed

    Allen, R J; Rieger, T R; Musante, C J

    2016-03-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
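
    A toy sketch of the generate-then-select idea (the prior, the stand-in simulation, and the target statistics are all made up; the paper's algorithm differs in detail). Candidates are accepted with probability proportional to the ratio of the clinical target density to the density of the generated outputs, so the selected virtual population matches the target without per-patient weights:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)

      # 1) Generate virtual patients: sample a physiological parameter
      #    from a wide plausible range (hypothetical uniform prior).
      theta = rng.uniform(0.5, 2.0, size=5_000)

      # 2) "Simulate" each patient: a stand-in model maps parameter -> observable.
      output = np.log(theta) + rng.normal(scale=0.05, size=theta.size)

      # 3) Select rather than weight: accept with probability proportional
      #    to target density / density of the generated outputs.
      target = stats.norm(loc=0.2, scale=0.15)   # observed cohort statistics
      gen_pdf = stats.gaussian_kde(output)       # density of generated outputs
      ratio = target.pdf(output) / gen_pdf(output)
      accept = rng.uniform(size=output.size) < ratio / ratio.max()

      vpop = output[accept]
      print(f"{vpop.size} virtual patients selected; "
            f"mean={vpop.mean():.3f}, sd={vpop.std():.3f}")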

  2. Linking Antisocial Behavior, Substance Use, and Personality: An Integrative Quantitative Model of the Adult Externalizing Spectrum

    PubMed Central

    Krueger, Robert F.; Markon, Kristian E.; Patrick, Christopher J.; Benning, Stephen D.; Kramer, Mark D.

    2008-01-01

    Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena. PMID:18020714
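
    Models of this kind link a latent externalizing trait θ to the probability of endorsing each specific behavior through an item response function; the two-parameter logistic form is the standard example (illustrative; the paper combines IRT with semiparametric factor analysis):

      \[ P(x_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\!\left[-a_j\,(\theta_i - b_j)\right]} \]

    with a_j the discrimination and b_j the severity of item j along the latent externalizing dimension.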

  3. Quantitative inverse modelling of a cylindrical object in the laboratory using ERT: An error analysis

    NASA Astrophysics Data System (ADS)

    Korteland, Suze-Anne; Heimovaara, Timo

    2015-03-01

    Electrical resistivity tomography (ERT) is a geophysical technique that can be used to obtain three-dimensional images of the bulk electrical conductivity of the subsurface. Because the electrical conductivity is strongly related to properties of the subsurface and the flow of water it has become a valuable tool for visualization in many hydrogeological and environmental applications. In recent years, ERT is increasingly being used for quantitative characterization, which requires more detailed prior information than a conventional geophysical inversion for qualitative purposes. In addition, the careful interpretation of measurement and modelling errors is critical if ERT measurements are to be used in a quantitative way. This paper explores the quantitative determination of the electrical conductivity distribution of a cylindrical object placed in a water bath in a laboratory-scale tank. Because of the sharp conductivity contrast between the object and the water, a standard geophysical inversion using a smoothness constraint could not reproduce this target accurately. Better results were obtained by using the ERT measurements to constrain a model describing the geometry of the system. The posterior probability distributions of the parameters describing the geometry were estimated with the Markov chain Monte Carlo method DREAM(ZS). Using the ERT measurements this way, accurate estimates of the parameters could be obtained. The information quality of the measurements was assessed by a detailed analysis of the errors. Even for the uncomplicated laboratory setup used in this paper, errors in the modelling of the shape and position of the electrodes and the shape of the domain could be identified. The results indicate that the ERT measurements have a high information content which can be accessed by the inclusion of prior information and the consideration of measurement and modelling errors.
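
    A minimal Metropolis sketch of the posterior-estimation idea, with a one-parameter toy forward model standing in for the full ERT simulation (DREAM(ZS) is a more sophisticated multi-chain sampler, but the logic is the same):

      import numpy as np

      rng = np.random.default_rng(7)

      def forward(radius):
          """Toy forward model: stands in for the ERT simulation of a
          cylindrical object with the given geometry parameter."""
          return 2.0 * radius ** 2

      true_radius, sigma = 1.5, 0.1
      data = forward(true_radius) + rng.normal(scale=sigma, size=20)

      def log_post(radius):
          if not 0.0 < radius < 5.0:    # uniform prior bounds
              return -np.inf
          return -0.5 * np.sum((data - forward(radius)) ** 2) / sigma ** 2

      chain, r = [], 1.0                # random-walk Metropolis
      lp = log_post(r)
      for _ in range(20_000):
          r_new = r + rng.normal(scale=0.05)
          lp_new = log_post(r_new)
          if np.log(rng.uniform()) < lp_new - lp:
              r, lp = r_new, lp_new
          chain.append(r)

      post = np.array(chain[5_000:])    # discard burn-in
      print(f"posterior mean = {post.mean():.3f}, sd = {post.std():.3f}")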

  4. The use of mode of action information in risk assessment: quantitative key events/dose-response framework for modeling the dose-response for key events.

    PubMed

    Simon, Ted W; Simons, S Stoney; Preston, R Julian; Boobis, Alan R; Cohen, Samuel M; Doerrer, Nancy G; Fenner-Crisp, Penelope A; McMullin, Tami S; McQueen, Charlene A; Rowlands, J Craig

    2014-08-01

    The HESI RISK21 project formed the Dose-Response/Mode-of-Action Subteam to develop strategies for using all available data (in vitro, in vivo, and in silico) to advance the next generation of chemical risk assessments. A goal of the Subteam is to enhance the existing Mode of Action/Human Relevance Framework and Key Events/Dose Response Framework (KEDRF) to make the best use of quantitative dose-response and timing information for Key Events (KEs). The resulting Quantitative Key Events/Dose-Response Framework (Q-KEDRF) provides a structured quantitative approach for systematic examination of the dose-response and timing of KEs resulting from a dose of a bioactive agent that causes a potential adverse outcome. Two concepts are described as aids to increasing the understanding of mode of action: Associative Events and Modulating Factors. These concepts are illustrated in two case studies: (1) cholinesterase inhibition by the pesticide chlorpyrifos, which illustrates the necessity of considering quantitative dose-response information when assessing the effect of a Modulating Factor, that is, enzyme polymorphisms in humans; and (2) estrogen-induced uterotrophic responses in rodents, which demonstrate how quantitative dose-response modeling for KEs, the understanding of temporal relationships between KEs, and a counterfactual examination of hypothesized KEs can determine whether they are Associative Events or true KEs. PMID:25070415

  5. The quantitative genetics of indirect genetic effects: a selective review of modelling issues.

    PubMed

    Bijma, P

    2014-01-01

    Indirect genetic effects (IGE) occur when the genotype of an individual affects the phenotypic trait value of another conspecific individual. IGEs can have profound effects on both the magnitude and the direction of response to selection. Models of inheritance and response to selection in traits subject to IGEs have been developed within two frameworks: a trait-based framework in which IGEs are specified as a direct consequence of individual trait values, and a variance-component framework in which phenotypic variance is decomposed into a direct and an indirect additive genetic component. This work is a selective review of the quantitative genetics of traits affected by IGEs, with a focus on modelling, estimation and interpretation issues. It includes a discussion on variance-component vs trait-based models of IGEs, a review of issues related to the estimation of IGEs from field data, including the estimation of the interaction coefficient Ψ (psi), and a discussion on the relevance of IGEs for response to selection in cases where the strength of interaction varies among pairs of individuals. An investigation of the trait-based model shows that the interaction coefficient Ψ may deviate considerably from the corresponding regression coefficient when feedback occurs. The increasing research effort devoted to IGEs suggests that they are a widespread phenomenon, probably particularly in natural populations and plants. Further work in this field should considerably broaden our understanding of the quantitative genetics of inheritance and response to selection in relation to the social organisation of populations. PMID:23512010
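
    The feedback point can be made explicit with a two-individual trait-based sketch (textbook-style; notation chosen here). If each trait value responds to the partner's trait value through the interaction coefficient Ψ,

      \[ z_i = \mu_i + \Psi z_j, \qquad z_j = \mu_j + \Psi z_i \quad\Longrightarrow\quad z_i = \frac{\mu_i + \Psi \mu_j}{1 - \Psi^2}, \]

    and if μ_i and μ_j are independent with equal variance, the regression of z_i on z_j works out to 2Ψ/(1 + Ψ²) rather than Ψ, showing how feedback drives the interaction coefficient and the regression coefficient apart.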

  6. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model

    PubMed Central

    Yoshioka, S.; Matsuhana, B.; Tanaka, S.; Inouye, Y.; Oshima, N.; Kinoshita, S.

    2011-01-01

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model. PMID:20554565

  7. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model.

    PubMed

    Yoshioka, S; Matsuhana, B; Tanaka, S; Inouye, Y; Oshima, N; Kinoshita, S

    2011-01-01

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model. PMID:20554565
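
    The optics behind the model is standard multilayer interference: for platelets of refractive index n_a and thickness d_a separated by cytoplasm gaps (n_b, d_b), reflectance peaks at wavelengths satisfying the ideal-multilayer condition (a generic relation, not the paper's full analysis):

      \[ m\lambda = 2\,(n_a d_a \cos\theta_a + n_b d_b \cos\theta_b), \qquad m = 1, 2, \ldots \]

    Tilting the platelets, as the Venetian blind model proposes, changes the effective gap d_b along the stacking direction and thereby shifts the reflected wavelength.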

  8. Quantitative analysis of anaerobic oxidation of methane (AOM) in marine sediments: A modeling perspective

    NASA Astrophysics Data System (ADS)

    Regnier, P.; Dale, A. W.; Arndt, S.; LaRowe, D. E.; Mogollón, J.; Van Cappellen, P.

    2011-05-01

    Recent developments in the quantitative modeling of methane dynamics and anaerobic oxidation of methane (AOM) in marine sediments are critically reviewed. The first part of the review begins with a comparison of alternative kinetic models for AOM. The roles of bioenergetic limitations, intermediate compounds and biomass growth are highlighted. Next, the key transport mechanisms in multi-phase sedimentary environments affecting AOM and methane fluxes are briefly treated, while attention is also given to additional controls on methane and sulfate turnover, including organic matter mineralization, sulfur cycling and methane phase transitions. In the second part of the review, the structure, forcing functions and parameterization of published models of AOM in sediments are analyzed. The six-orders-of-magnitude range in rate constants reported for the widely used bimolecular rate law for AOM emphasizes the limited transferability of this simple kinetic model and, hence, the need for more comprehensive descriptions of the AOM reaction system. The derivation and implementation of more complete reaction models, however, are limited by the availability of observational data. In this context, we attempt to rank the relative benefits of potential experimental measurements that should help to better constrain AOM models. The last part of the review presents a compilation of reported depth-integrated AOM rates (ΣAOM). These rates reveal the extreme variability of ΣAOM in marine sediments. The model results are further used to derive quantitative relationships between ΣAOM and the magnitude of externally impressed fluid flow, as well as between ΣAOM and the depth of the sulfate-methane transition zone (SMTZ). This review contributes to an improved understanding of the global significance of the AOM process, and helps identify outstanding questions and future directions in the modeling of methane cycling and AOM in marine sediments.
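
    The bimolecular rate law discussed in the review has the simple form (generic notation; bioenergetic variants multiply it by a thermodynamic limitation factor between 0 and 1):

      \[ R_{\mathrm{AOM}} = k\, [\mathrm{CH_4}]\, [\mathrm{SO_4^{2-}}] \]

    The six-orders-of-magnitude spread in reported values of k is precisely what motivates the more comprehensive kinetic descriptions reviewed here.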

  9. Mathematical model of the Tat-Rev regulation of HIV-1 replication in an activated cell predicts the existence of oscillatory dynamics in the synthesis of viral components

    PubMed Central

    2014-01-01

    analyzed alternative hypotheses for the re-cycling of the Rev proteins both in the cytoplasm and the nuclear pore complex. Conclusions The quantitative mathematical model of the Tat-Rev regulation of HIV-1 replication predicts the existence of oscillatory dynamics which depends on the efficacy of the Tat and TAR interaction as well as on the Rev-mediated transport processes. The biological relevance of the oscillatory regimes for the HIV-1 life cycle is discussed. PMID:25564443

  10. A Quantitative Model of Keyhole Instability Induced Porosity in Laser Welding of Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Pang, Shengyong; Chen, Weidong; Wang, Wen

    2014-06-01

    Quantitative prediction of the porosity defects in deep penetration laser welding has generally been considered a very challenging task. In this study, a quantitative model of porosity defects induced by keyhole instability in partial penetration CO2 laser welding of a titanium alloy is proposed. The three-dimensional keyhole instability, weld pool dynamics, and pore formation are determined by direct numerical simulation, and the results are compared to prior experimental results. It is shown that the simulated keyhole depth fluctuations could represent the variation trends in the number and average size of pores for the studied process conditions. Moreover, it is found that it is possible to use the predicted keyhole depth fluctuations as a quantitative measure of the average size of porosity. The results also suggest that, due to the shadowing effect of keyhole wall humps, the rapid cooling of the surface of the keyhole tip before keyhole collapse could lead to a substantial decrease in vapor pressure inside the keyhole tip, which is suggested to be the mechanism by which shielding gas enters the pores.

  11. Four-Point Bending as a Method for Quantitatively Evaluating Spinal Arthrodesis in a Rat Model

    PubMed Central

    Robinson, Samuel T; Svet, Mark T; Kanim, Linda A; Metzger, Melodie F

    2015-01-01

    The most common method of evaluating the success (or failure) of rat spinal fusion procedures is manual palpation testing. Whereas manual palpation provides only a subjective binary answer (fused or not fused) regarding the success of a fusion surgery, mechanical testing can provide more quantitative data by assessing variations in strength among treatment groups. We here describe a mechanical testing method to quantitatively assess single-level spinal fusion in a rat model, to improve on the binary and subjective nature of manual palpation as an end point for fusion-related studies. We tested explanted lumbar segments from Sprague–Dawley rat spines after single-level posterolateral fusion procedures at L4–L5. Segments were classified as ‘not fused,’ ‘restricted motion,’ or ‘fused’ by using manual palpation testing. After thorough dissection and potting of the spine, 4-point bending in flexion then was applied to the L4–L5 motion segment, and stiffness was measured as the slope of the moment–displacement curve. Results demonstrated statistically significant differences in stiffness among all groups, which were consistent with preliminary grading according to manual palpation. In addition, the 4-point bending results provided quantitative information regarding the quality of the bony union formed and therefore enabled the comparison of fused specimens. Our results demonstrate that 4-point bending is a simple, reliable, and effective way to describe and compare results among rat spines after fusion surgery. PMID:25730756
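
    The stiffness computation itself is just the slope of a linear fit; a minimal sketch with hypothetical moment-displacement readings from the linear region of the test:

      import numpy as np

      # Hypothetical four-point-bend readings from the L4-L5 segment
      displacement = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])  # mm
      moment = np.array([0.0, 1.9, 4.1, 6.0, 7.9, 10.1])             # N*mm

      # Stiffness = slope of the moment-displacement curve
      stiffness, _ = np.polyfit(displacement, moment, 1)
      print(f"flexural stiffness: {stiffness:.1f} N*mm/mm")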

  12. Synthetic cannabinoids: In silico prediction of the cannabinoid receptor 1 affinity by a quantitative structure-activity relationship model.

    PubMed

    Paulke, Alexander; Proschak, Ewgenij; Sommer, Kai; Achenbach, Janosch; Wunder, Cora; Toennes, Stefan W

    2016-03-14

    The number of new synthetic psychoactive compounds increases steadily. Among the group of these psychoactive compounds, the synthetic cannabinoids (SCBs) are most popular and serve as a substitute for herbal cannabis. More than 600 of these substances already exist. For some SCBs the in vitro cannabinoid receptor 1 (CB1) affinity is known, but for the majority it is unknown. A quantitative structure-activity relationship (QSAR) model was developed, which allows the determination of the SCBs' affinity to CB1 (expressed as the binding constant, Ki) without reference substances. The chemically advanced template search descriptor was used for vector representation of the compound structures. The similarity between two molecules was calculated using the Feature-Pair Distribution Similarity. The Ki values were calculated using the Inverse Distance Weighting method. The prediction model was validated using a cross validation procedure. The predicted Ki values of some new SCBs were in a range between 20 (considerably higher affinity to CB1 than THC) and 468 (considerably lower affinity to CB1 than THC). The present QSAR model can serve as a simple, fast and cheap tool to get a first hint of the biological activity of new synthetic cannabinoids or of other new psychoactive compounds. PMID:26795018
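
    A small sketch of the inverse-distance-weighting step (the similarities and Ki values below are made up, and the CATS-descriptor and Feature-Pair Distribution Similarity computations are omitted):

      import numpy as np

      def idw_predict(similarities, ki_values, power=2.0, eps=1e-12):
          """Inverse distance weighting: more-similar training compounds
          get larger weights; distance is taken as 1 - similarity."""
          dist = 1.0 - np.asarray(similarities)
          w = 1.0 / np.maximum(dist, eps) ** power
          return float(np.sum(w * np.asarray(ki_values)) / np.sum(w))

      # Hypothetical similarities of a query cannabinoid to four knowns
      sims = [0.91, 0.80, 0.55, 0.30]
      kis = [25.0, 60.0, 310.0, 450.0]   # their known CB1 Ki values (nM)
      print(f"predicted Ki: {idw_predict(sims, kis):.0f} nM")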

  13. Evolution of a fold-thrust belt deforming a unit with pre-existing linear asperities: Insights from analog models

    NASA Astrophysics Data System (ADS)

    Burberry, Caroline M.; Swiatlowski, Jerlyn L.

    2016-06-01

    Heterogeneity, whether geometric or rheologic, in crustal material undergoing compression affects the geometry of the structures produced. This study documents the thrust fault geometries produced when discrete linear asperities are introduced into an analog model, scaled to represent bulk upper crustal properties, and compressed. Varying obliquities of the asperities are used, relative to the imposed compression, and the resultant development of thrust fault traces and branch lines in map view is tracked. Once the model runs are completed, cross-sections are created and analyzed. The models show that asperities confined to the base layer promote the clustering of branch lines in the surface thrusts. Strong clustering in branch lines is also noted where several asperities are in close proximity or cross. Slight reverse-sense reactivation of asperities cut through the sedimentary sequence is noted in cross-section, where the asperity and the subsequent thrust belt interact. The model results are comparable to the situation in the Dinaric Alps, where pre-existing faults to the SW of the NE Adriatic Fault Zone contribute to the clustering of branch lines developed in the surface fold-thrust belt. These results can therefore be used to evaluate the evolution of other basement-involved fold-thrust belts worldwide.

  14. Quantitative functional MRI in a clinical orthotopic model of pancreatic cancer in immunocompetent Lewis rats

    PubMed Central

    Zhang, Zhuoli; Zheng, Linfeng; Li, Weiguo; Gordon, Andrew C; Huan, Yi; Shangguan, Junjie; Procissi, Daniel; Bentrem, David J; Larson, Andrew C

    2015-01-01

    Objective: To demonstrate feasibility of performing quantitative MRI measurements in an immunocompetent rat model of pancreatic cancer by comparing in vivo anatomic and quantitative imaging measurements to tumor dissemination observations and histologic assays at necropsy. Materials and methods: The rat ductal pancreatic adenocarcinoma DSL-6A/C1 cell line and Lewis rats were used for these studies. 10(8) DSL-6A/C1 cells were injected subcutaneously into the right flank of donor rats. Donor tumors reaching 10 mm were excised, and 1 mm(3) tumor fragments were implanted within the recipient rat pancreas during mini-laparotomy. T1-weighted, T2-weighted, diffusion-weighted, and dynamic contrast-enhanced (DCE) MRI were performed using a Bruker 7.0T ClinScan. After MRI, all animals underwent autopsy. Primary tumor size was measured, and a dissemination score was used to assess local invasion and distant metastasis. The primary tumor and all sites of metastases were harvested and fixed for H&E, Masson's trichrome, and rat anti-CD34 staining. Trichrome slides were scanned and digitized for measurement of fibrotic tissue areas. Anti-CD34 slides were used for microvessel density (MVD) measurements. Results: Primary tumors, local invasion, and distant metastases were confirmed for all rats. No significant differences were found between in vivo MRI measurements (48.7 ± 5.3 mm) and ex vivo caliper measurements (43.6 ± 3.6 mm) of primary tumor sizes (p > .05). Spleen, liver, diaphragm, peritoneum, and abdominal wall metastases were observed on MRI, but smaller lung, mediastinum, omentum, and mesentery metastases were only observed at necropsy. Contrast uptake observed during DCE measurements was significantly greater in both primary and metastatic tumor tissues compared to skeletal muscle and normal liver tissues. Both primary and metastatic tumors were hyper-intense in T2-weighted images and hypo-intense in T1-weighted images, but no differences were found between quantitative T2 measurements in

  15. Development and evaluation of a model-based downscatter compensation method for quantitative I-131 SPECT

    PubMed Central

    Song, Na; Du, Yong; He, Bin; Frey, Eric C.

    2011-01-01

    Purpose: The radionuclide 131I has found widespread use in targeted radionuclide therapy (TRT), partly due to the fact that it emits photons that can be imaged to perform treatment planning or posttherapy dose verification as well as beta rays that are suitable for therapy. In both the treatment planning and dose verification applications, it is necessary to estimate the activity distribution in organs or tumors at several time points. In vivo estimates of the 131I activity distribution at each time point can be obtained from quantitative single-photon emission computed tomography (QSPECT) images and organ activity estimates can be obtained either from QSPECT images or quantification of planar projection data. However, in addition to the photon used for imaging, 131I decay results in emission of a number of other higher-energy photons with significant abundances. These higher-energy photons can scatter in the body, collimator, or detector and be counted in the 364 keV photopeak energy window, resulting in reduced image contrast and degraded quantitative accuracy; these photons are referred to as downscatter. The goal of this study was to develop and evaluate a model-based downscatter compensation method specifically designed for the compensation of high-energy photons emitted by 131I and detected in the imaging energy window. Methods: In the evaluation study, we used a Monte Carlo simulation (MCS) code that had previously been validated for other radionuclides. Thus, in preparation for the evaluation study, we first validated the code for 131I imaging simulation by comparison with experimental data. Next, we assessed the accuracy of the downscatter model by comparing downscatter estimates with MCS results. Finally, we combined the downscatter model with iterative reconstruction-based compensation for attenuation (A) and scatter (S) and the full (D) collimator-detector response of the 364 keV photons to form a comprehensive compensation method. We evaluated this
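
    To make the compensation idea concrete, here is a minimal sketch (ours, under toy assumptions; not the authors' implementation) of an ML-EM update in which a model-based downscatter estimate enters the forward projection as an additive term, so the reconstruction accounts for the extra counts rather than subtracting them from the data.

    ```python
    # Toy ML-EM with an additive downscatter term in the forward model:
    # expected counts = A @ x + downscatter. A, sizes, and values are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_bins = 64, 96
    A = rng.random((n_bins, n_pix)) * 0.1        # toy system matrix
    x_true = rng.random(n_pix) * 10.0
    downscatter = rng.random(n_bins) * 2.0       # model-based downscatter estimate (given)
    y = rng.poisson(A @ x_true + downscatter)    # measured photopeak-window counts

    x = np.ones(n_pix)                           # uniform initial image
    sens = A.sum(axis=0)                         # sensitivity image, A^T 1
    eps = 1e-12
    for _ in range(50):
        expected = A @ x + downscatter           # forward model includes downscatter
        x *= (A.T @ (y / np.maximum(expected, eps))) / np.maximum(sens, eps)
    ```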

  16. Satellite contributions to the quantitative characterization of biomass burning for climate modeling

    NASA Astrophysics Data System (ADS)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-07-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the quantitative representation of biomass burning in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates, have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of Earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large number of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  17. Development and Validation of Quantitative Structure-Activity Relationship Models for Compounds Acting on Serotoninergic Receptors

    PubMed Central

    Żydek, Grażyna; Brzezińska, Elżbieta

    2012-01-01

    A quantitative structure-activity relationship (QSAR) study was made of 20 compounds with serotonin (5-HT) receptor affinity. Thin-layer chromatographic (TLC) data and physicochemical parameters were applied in this study. Silanized RP-2 TLC 60F254 plates impregnated with solutions of propionic acid, ethylbenzene, 4-ethylphenol, and propionamide (used as analogues of the key receptor amino acids) and their mixtures (denoted as S1–S7 biochromatographic models) were used in two developing phases as a model of drug-5-HT receptor interaction. The semiempirical method AM1 (HyperChem v. 7.0) and the ACD/Labs v. 8.0 program were employed to calculate a set of physicochemical parameters for the investigated compounds. Correlation and multiple linear regression analysis were used to search for the best QSAR equations. The correlations obtained for the compounds studied represent their interactions with the proposed biochromatographic models. The good multivariate relationships (R² = 0.78–0.84) obtained by regression analysis can be used to predict the biological activity of different compounds with 5-HT receptor affinity. “Leave-one-out” (LOO) and “leave-N-out” (LNO) cross-validation methods were used to judge the predictive power of the final regression equations. PMID:22619602
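
    As a concrete illustration of the LOO step (a sketch using synthetic data, not the authors' code), the predictive Q² of a multiple linear regression can be computed from leave-one-out predictions:

    ```python
    # LOO cross-validated Q^2 for an MLR QSAR-style model (synthetic data).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 4))                        # 20 compounds x 4 descriptors
    y = X @ np.array([1.2, -0.7, 0.4, 0.9]) + rng.normal(scale=0.3, size=20)

    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"Q^2 (LOO) = {q2:.2f}")
    ```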

  18. Quantitative Analysis and Modeling Probe Polarity Establishment in C. elegans Embryos

    PubMed Central

    Blanchoud, Simon; Busso, Coralie; Naef, Félix; Gönczy, Pierre

    2015-01-01

    Cell polarity underlies many aspects of metazoan development and homeostasis, and relies notably on a set of PAR proteins located at the cell cortex. How these proteins interact in space and time remains incompletely understood. We performed a quantitative assessment of polarity establishment in one-cell stage Caenorhabditis elegans embryos by combining time-lapse microscopy and image analysis. We used our extensive data set to challenge and further specify an extant mathematical model. Using likelihood-based calibration, we uncovered that cooperativity is required for both anterior and posterior PAR complexes. Moreover, we analyzed the dependence of polarity establishment on changes in size or temperature. The observed robustness of PAR domain dimensions in embryos of different sizes is in agreement with a model incorporating fixed protein concentrations and variations in embryo surface/volume ratio. In addition, we quantified the dynamics of polarity establishment over most of the viable temperature range of C. elegans. Modeling of these data suggests that diffusion of PAR proteins is the process most affected by temperature changes, although cortical flows appear unaffected. Overall, our quantitative analytical framework provides insights into the dynamics of polarity establishment in a developing system. PMID:25692585
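
    The flavor of likelihood-based calibration can be sketched as follows (a toy saturating-growth model under Gaussian noise; the actual PAR model is a reaction-diffusion system, and this is not the authors' code):

    ```python
    # Maximum-likelihood calibration of a toy dynamical model (illustrative).
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.0, 10.0, 50)
    rng = np.random.default_rng(2)
    obs = 1 - np.exp(-0.6 * t) + rng.normal(scale=0.05, size=t.size)  # synthetic data

    def neg_log_lik(params):
        rate, sigma = params
        pred = 1 - np.exp(-rate * t)           # toy model of domain growth
        r = obs - pred
        return 0.5 * np.sum(r**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

    fit = minimize(neg_log_lik, x0=[0.3, 0.1], bounds=[(1e-3, 5.0), (1e-3, 1.0)])
    print("MLE estimates (rate, sigma):", fit.x)
    ```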

  19. Deriving models of customer satisfaction: A comparison of alternative quantitative approaches

    SciTech Connect

    Schultz, M.T.

    1994-11-01

    PECO Energy, like many other companies, measures customer satisfaction and has debated how best to model the results. PECO Energy uses a customer satisfaction model based on multiple regression, in which both the independent and dependent variables are responses to survey questions on a fully anchored five-point scale. In addition to multiple regression, a number of other multivariate procedures can be used to develop a quantitative model of customer satisfaction. This paper compares and contrasts results obtained from standard multiple regression, multiple regression with dummy coding, discriminant function analysis, and logistic regression. Findings suggest that each of these methods can yield satisfactory information regarding customer perception.
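
    A minimal sketch of the four-way comparison (synthetic five-point survey data; not PECO Energy's data or code):

    ```python
    # Modeling the same 5-point survey responses four ways (illustrative data).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.preprocessing import OneHotEncoder

    rng = np.random.default_rng(3)
    X = rng.integers(1, 6, size=(300, 3)).astype(float)      # three 5-point predictors
    y = np.clip(np.round(X.mean(axis=1) + rng.normal(scale=0.7, size=300)), 1, 5)

    ols = LinearRegression().fit(X, y)                       # scale treated as interval
    X_dummy = OneHotEncoder(drop='first', sparse_output=False).fit_transform(
        X.astype(int).astype(str))
    ols_dummy = LinearRegression().fit(X_dummy, y)           # scale treated as categorical
    lda = LinearDiscriminantAnalysis().fit(X, y)             # satisfaction as a class label
    logit = LogisticRegression(max_iter=1000).fit(X, y > 3)  # top-2-box vs rest

    print(f"R^2 interval = {ols.score(X, y):.2f}, "
          f"R^2 dummy = {ols_dummy.score(X_dummy, y):.2f}")
    ```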

  20. Development of quantitative interspecies toxicity relationship modeling of chemicals to fish.

    PubMed

    Fatemi, M H; Mousa Shahroudi, E; Amini, Z

    2015-09-01

    In this work, quantitative interspecies toxicity relationship methodologies were used to improve the predictive power of an interspecies toxicity model. The most relevant descriptors, selected by stepwise multiple linear regression, together with the toxicity of each chemical to Daphnia magna, were used to predict the toxicities of chemicals to fish. The modeling methods used to develop linear and nonlinear models were multiple linear regression (MLR), random forest (RF), artificial neural network (ANN), and support vector machine (SVM). The results indicate the superiority of the SVM model over the other models. The robustness and reliability of the constructed SVM model were evaluated by leave-one-out cross-validation (Q² = 0.69, S_PRESS = 0.822) and a Y-randomization test (R² = 0.268 over 30 trials). Furthermore, the chemical applicability domains of these models were determined via the leverage approach. The developed SVM model was used to predict the toxicity to fish of 46 compounds whose experimental fish toxicities had not previously been reported, from their toxicities to D. magna and the relevant molecular descriptors. PMID:26002421
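
    The Y-randomization check mentioned above can be sketched as follows (synthetic stand-in data; persistently low R² after shuffling argues against chance correlation):

    ```python
    # Y-randomization for an SVM regression model (illustrative data).
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    X = rng.normal(size=(60, 5))                          # 60 chemicals x 5 descriptors
    y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.2, size=60)

    model = SVR(kernel='rbf', C=10.0)
    r2_true = model.fit(X, y).score(X, y)

    r2_shuffled = []
    for _ in range(30):                                   # 30 trials, as in the abstract
        ys = rng.permutation(y)                           # break the X-y link
        r2_shuffled.append(model.fit(X, ys).score(X, ys))

    print(f"R^2 real = {r2_true:.2f}, mean R^2 shuffled = {np.mean(r2_shuffled):.2f}")
    ```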

  1. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2D Gaussian (σ ≈ 35°, r ≈ 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparently sigmoidal tuning using a restricted range of stimuli (±40°). 2) Spiral tuning and invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for ≈10° shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
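
    The 2D Gaussian fit referred to above can be sketched as follows (synthetic tuning data; parameter values are illustrative, not the study's):

    ```python
    # Fit a 2D Gaussian to heading-tuning responses (synthetic example).
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, amp, x0, y0, sigma, base):
        x, y = xy
        return base + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))

    az, el = np.meshgrid(np.linspace(-50, 50, 11), np.linspace(-50, 50, 11))
    rng = np.random.default_rng(5)
    resp = gauss2d((az, el), 40.0, 5.0, -10.0, 35.0, 2.0) \
           + rng.normal(scale=2.0, size=az.shape)

    p, _ = curve_fit(gauss2d, (az.ravel(), el.ravel()), resp.ravel(),
                     p0=[30, 0, 0, 30, 0])
    print(f"fitted sigma ~ {p[3]:.1f} deg")   # cf. sigma ~ 35 deg reported above
    ```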

  2. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    EPA Science Inventory

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  3. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    NASA Astrophysics Data System (ADS)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves, even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m² MPa⁻¹ s⁻¹, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptation for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
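
    For intuition, the kind of first-principles calculation such hydraulic models rest on can be sketched with Hagen-Poiseuille flow through a tracheid-like capillary (the radius below is hypothetical, not an Asteroxylon measurement):

    ```python
    # Area-normalized hydraulic conductivity of an idealized tracheid,
    # k/A = r^2 / (8 mu), from Hagen-Poiseuille flow (illustrative radius).
    import numpy as np

    def area_normalized_conductivity(radius_m, viscosity_pa_s=1.002e-3):
        k = np.pi * radius_m**4 / (8.0 * viscosity_pa_s)   # m^4 / (Pa s)
        return k / (np.pi * radius_m**2)                   # m^2 / (Pa s), per lumen area

    k_area = area_normalized_conductivity(radius_m=10e-6)  # 10-micron lumen radius
    print(f"~{k_area * 1e6:.3f} m^2 MPa^-1 s^-1")          # near the 0.015 threshold above
    ```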

  4. Evaluation of the Use of Existing RELAP5-3D Models to Represent the Actinide Burner Test Reactor

    SciTech Connect

    C. B. Davis

    2007-02-01

    The RELAP5-3D code is being considered as a thermal-hydraulic system code to support the development of the sodium-cooled Actinide Burner Test Reactor as part of the Global Nuclear Energy Partnership. An evaluation was performed to determine whether the control system could be used to simulate the effects of non-convective mechanisms of heat transport in the fluid that are not currently represented with internal code models, including axial and radial heat conduction in the fluid and subchannel mixing. The evaluation also determined the relative importance of axial and radial heat conduction and fluid mixing on peak cladding temperature for a wide range of steady conditions and during a representative loss-of-flow transient. The evaluation was performed using a RELAP5-3D model of a subassembly in the Experimental Breeder Reactor-II, which was used as a surrogate for the Actinide Burner Test Reactor. An evaluation was also performed to determine whether the existing centrifugal pump model could be used to simulate the performance of electromagnetic pumps.

  5. Substrate Hydroxylation in Methane Monooxygenase: Quantitative Modeling via Mixed Quantum Mechanics/ Molecular Mechanics Techniques

    SciTech Connect

    Gherman, Benjamin F.; Lippard, Stephen J.; Friesner, Richard A.

    2005-01-26

    The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. Using broken-symmetry unrestricted density functional theory quantum mechanical (QM) methods in concert with mixed quantum mechanics/molecular mechanics (QM/MM) methods, the hydroxylation of methane and substituted methanes by intermediate Q in methane monooxygenase hydroxylase (MMOH) has been quantitatively modeled. This protocol allows the protein environment to be included throughout the calculations and its effects (electrostatic, van der Waals, strain) upon the reaction to be accurately evaluated. With the current results, recent kinetic data for CH₃X (X = H, CH₃, OH, CN, NO₂) substrate hydroxylation in MMOH (Ambundo, E. A.; Friesner, R. A.; Lippard, S. J. J. Am. Chem. Soc. 2002, 124, 8770-8771) can be rationalized. Results for methane, which provide a quantitative test of the protocol, including a substantial kinetic isotope effect (KIE), are in reasonable agreement with experiment. Specific features of the interaction of each of the substrates with MMO are illuminated by the QM/MM modeling, and the resulting effects upon substrate binding are quantitatively incorporated into the calculations. The results as a whole point to the success of the QM/MM methodology and enhance our understanding of MMOH catalytic chemistry. We also identify systematic errors in the evaluation of the free energy of binding of the Michaelis complexes of the substrates, which most likely arise from inadequate sampling and/or the use of harmonic approximations to evaluate the entropy of the complex. More sophisticated sampling methods will be required to achieve greater accuracy in this aspect of the calculation.

  6. Quantitative MRI and ultrastructural examination of the cuprizone mouse model of demyelination.

    PubMed

    Thiessen, Jonathan D; Zhang, Yanbo; Zhang, Handi; Wang, Lingyan; Buist, Richard; Del Bigio, Marc R; Kong, Jiming; Li, Xin-Min; Martin, Melanie

    2013-11-01

    The cuprizone mouse model of demyelination was used to investigate the influence that white matter changes have on different magnetic resonance imaging results. In vivo T2-weighted and magnetization transfer images (MTIs) were acquired weekly in control (n = 5) and cuprizone-fed (n = 5) mice, with significant increases in signal intensity in T2-weighted images (p < 0.001) and lower magnetization transfer ratios (p < 0.001) in the corpus callosum of the cuprizone-fed mice starting at 3 weeks and peaking at 4 and 5 weeks, respectively. Diffusion tensor imaging (DTI), quantitative MTI (qMTI), and T1/T2 measurements were used to analyze freshly excised tissue after 6 weeks of cuprizone administration. In multicomponent T2 analysis with 10 ms echo spacing, there was no visible myelin water component associated with the short T2 value. Quantitative MTI metrics showed significant differences in the corpus callosum and external capsule of the cuprizone-fed mice, similar to previous studies of multiple sclerosis in humans and animal models of demyelination. Fractional anisotropy was significantly lower and mean, axial, and radial diffusivity were significantly higher in the cuprizone-fed mice. Cellular distributions measured in electron micrographs of the corpus callosum correlated strongly with several different quantitative MRI metrics. The largest Spearman correlation coefficient varied depending on cell type: T1 versus the myelinated axon fraction (ρ = -0.90), the bound pool fraction (ƒ) versus the myelin sheath fraction (ρ = 0.93), and axial diffusivity versus the non-myelinated cell fraction (ρ = 0.92). Using Pearson's correlation coefficient, ƒ was strongly correlated with the myelin sheath fraction (r = 0.98), with a linear equation predicting myelin content (5.37ƒ - 0.25). Of the calculated MRI metrics, ƒ was the strongest indicator of myelin content, while longitudinal relaxation rates and diffusivity measurements were the strongest indicators of changes in
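
    The correlation step can be sketched as follows (synthetic values built around the reported linear relation; not the study's data):

    ```python
    # Correlating a quantitative MRI metric with a histological fraction.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    myelin_fraction = rng.uniform(0.05, 0.35, size=10)        # from EM (hypothetical)
    f_bound = (myelin_fraction + 0.25) / 5.37                 # invert myelin = 5.37f - 0.25
    f_bound = f_bound + rng.normal(scale=0.003, size=10)      # measurement noise

    rho, _ = stats.spearmanr(f_bound, myelin_fraction)
    r, _ = stats.pearsonr(f_bound, myelin_fraction)
    slope, intercept = np.polyfit(f_bound, myelin_fraction, 1)
    print(f"Spearman rho = {rho:.2f}, Pearson r = {r:.2f}")
    print(f"myelin ~ {slope:.2f} * f + {intercept:.2f}")      # cf. 5.37f - 0.25 above
    ```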

  7. Satellite Contributions to the Quantitative Characterization of Biomass Burning for Climate Modeling

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-01-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the quantitative representation of biomass burning in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates, have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of Earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large number of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  8. Numerical modeling of the seismic response of a large pre-existing landslide in the Marmara region

    NASA Astrophysics Data System (ADS)

    Bourdeau, Céline; Lenti, Luca; Martino, Salvatore

    2015-04-01

    Turkey is one of the most geologically active regions of Europe, prone to natural hazards, in particular earthquakes and landslides. Detailed seismological studies show that a catastrophic event is now expected in the Marmara region along the North Anatolian Fault Zone (NAFZ). On the shores of the Marmara Sea, about 30 km east of Istanbul and 15 km north of the NAFZ, urbanization is growing fast despite the presence of large pre-existing landslides. Whether such landslides could be reactivated under seismic shaking is a key question. In the framework of the MARsite European project, we selected one of the most critical landslides, the Büyükçekmece landslide, in order to assess its local seismic response. Based on detailed geophysical and geotechnical field investigations, a high-resolution engineering-geological model of the landslide slope was reconstructed. Numerical modeling was carried out on a longitudinal cross-section of this landslide with the 2D finite-difference code FLAC in order to assess the local seismic response of the slope and to evaluate whether conditions are suitable for earthquake-induced reactivation of the landslide. The obtained ground-motion amplification pattern along the slope surface is very complex and is strongly influenced by property changes between the pre-existing landslide mass and the surrounding material. Further comparisons of 2D versus 1D ground-motion amplifications on the one hand, and 2D versus topographic site effects on the other, will shed light on the parameters controlling the spatial variations of ground-motion amplification along the slope surface.

  9. Development of quantitative atomic modeling for tungsten transport study using LHD plasma with tungsten pellet injection

    NASA Astrophysics Data System (ADS)

    Murakami, I.; Sakaue, H. A.; Suzuki, C.; Kato, D.; Goto, M.; Tamura, N.; Sudo, S.; Morita, S.

    2015-09-01

    Quantitative tungsten studies with reliable atomic modeling are important for the successful achievement of ITER and fusion reactors. We have developed tungsten atomic modeling for understanding tungsten behavior in fusion plasmas. The modeling is applied to the analysis of tungsten spectra observed from plasmas of the Large Helical Device (LHD) with tungsten pellet injection. We found that extreme ultraviolet (EUV) emission of W²⁴⁺ to W³³⁺ ions at 1.5-3.5 nm is sensitive to electron temperature and useful for examining tungsten behavior in edge plasmas. We can reproduce measured EUV spectra at 1.5-3.5 nm with spectra calculated from the tungsten atomic model and obtain charge-state distributions of tungsten ions in LHD plasmas at different temperatures around 1 keV. Our model is also applied to calculate the unresolved transition array (UTA) seen in tungsten spectra at 4.5-7 nm. We analyze in detail the effect of configuration interaction on the population kinetics related to the UTA structure and find the importance of two-electron-one-photon transitions between the 4p⁵4dⁿ⁺¹ and 4p⁶4dⁿ⁻¹4f configurations. The radiation power rate of tungsten due to line emissions is also estimated with the model and is consistent with other models to within a factor of 2.

  10. Quantitative Validation of a Human Body Finite Element Model Using Rigid Body Impacts.

    PubMed

    Vavalle, Nicholas A; Davis, Matthew L; Stitzel, Joel D; Gayzik, F Scott

    2015-09-01

    Validation is a critical step in finite element model (FEM) development. This study focuses on the validation of the Global Human Body Models Consortium full-body average male occupant FEM in five localized loading regimes: a chest impact, a shoulder impact, a thoracoabdominal impact, an abdominal impact, and a pelvic impact. Force and deflection outputs from the model were compared to experimental traces and corridors scaled to the 50th-percentile male. Predicted fractures and injury severity measures were compared to evaluate the model's injury prediction capabilities. The methods of ISO/TS 18571 were used to quantitatively assess the fit of model outputs to experimental force and deflection traces. The model produced peak chest, shoulder, thoracoabdominal, abdominal, and pelvis forces of 4.8, 3.3, 4.5, 5.1, and 13.0 kN, compared to 4.3, 3.2, 4.0, 4.0, and 10.3 kN in the experiments, respectively. The model predicted rib and pelvic fractures consistent with the Abbreviated Injury Scale scores found experimentally in all cases except the abdominal impact. ISO/TS 18571 scores for the impacts studied had a mean of 0.73 and a range of 0.57-0.83. Well-validated FEMs are important tools used by engineers in advancing occupant safety. PMID:25739950

  11. Mathematical Modelling of a Brain Tumour Initiation and Early Development: A Coupled Model of Glioblastoma Growth, Pre-Existing Vessel Co-Option, Angiogenesis and Blood Perfusion

    PubMed Central

    Cai, Yan; Wu, Jie; Li, Zhiyong; Long, Quan

    2016-01-01

    We propose a coupled mathematical modelling system to investigate glioblastoma growth in response to dynamic changes in chemical and haemodynamic microenvironments caused by pre-existing vessel co-option, remodelling, collapse and angiogenesis. A typical tree-like architecture network with different orders for vessel diameter is designed to model pre-existing vasculature in host tissue. The chemical substances including oxygen, vascular endothelial growth factor, extra-cellular matrix and matrix degradation enzymes are calculated based on the haemodynamic environment which is obtained by coupled modelling of intravascular blood flow with interstitial fluid flow. The haemodynamic changes, including vessel diameter and permeability, are introduced to reflect a series of pathological characteristics of abnormal tumour vessels including vessel dilation, leakage, angiogenesis, regression and collapse. Migrating cells are included as a new phenotype to describe the migration behaviour of malignant tumour cells. The simulation focuses on the avascular phase of tumour development and stops at an early phase of angiogenesis. The model is able to demonstrate the main features of glioblastoma growth in this phase such as the formation of pseudopalisades, cell migration along the host vessels, the pre-existing vasculature co-option, angiogenesis and remodelling. The model also enables us to examine the influence of initial conditions and local environment on the early phase of glioblastoma growth. PMID:26934465

  12. Combining quantitative and qualitative measures of uncertainty in model-based environmental assessment: the NUSAP system.

    PubMed

    van der Sluijs, Jeroen P; Craye, Matthieu; Funtowicz, Silvio; Kloprogge, Penny; Ravetz, Jerry; Risbey, James

    2005-04-01

    This article discusses recent experiences with the Numeral Unit Spread Assessment Pedigree (NUSAP) system for multidimensional uncertainty assessment, based on four case studies that vary in complexity. We show that the NUSAP method is applicable not only to relatively simple calculation schemes but also to complex models in a meaningful way and that NUSAP is useful to assess not only parameter uncertainty but also (model) assumptions. A diagnostic diagram can be used to synthesize results of quantitative analysis of parameter sensitivity and qualitative review (pedigree analysis) of parameter strength. It provides an analytic tool to prioritize uncertainties according to quantitative and qualitative insights in the limitations of available knowledge. We show that extension of the pedigree scheme to include societal dimensions of uncertainty, such as problem framing and value-laden assumptions, further promotes reflexivity and collective learning. When used in a deliberative setting, NUSAP pedigree assessment has the potential to foster a deeper social debate and a negotiated management of complex environmental problems. PMID:15876219

  13. Influence of mom and dad: quantitative genetic models for maternal effects and genomic imprinting.

    PubMed

    Santure, Anna W; Spencer, Hamish G

    2006-08-01

    The expression of an imprinted gene is dependent on the sex of the parent it was inherited from, and as a result reciprocal heterozygotes may display different phenotypes. In contrast, maternal genetic terms arise when the phenotype of an offspring is influenced by the phenotype of its mother beyond the direct inheritance of alleles. Both maternal effects and imprinting may contribute to resemblance between offspring of the same mother. We demonstrate that two standard quantitative genetic models for deriving breeding values, population variances and covariances between relatives, are not equivalent when maternal genetic effects and imprinting are acting. Maternal and imprinting effects introduce both sex-dependent and generation-dependent effects that result in differences in the way additive and dominance effects are defined for the two approaches. We use a simple example to demonstrate that both imprinting and maternal genetic effects add extra terms to covariances between relatives and that model misspecification may over- or underestimate true covariances or lead to extremely variable parameter estimation. Thus, an understanding of various forms of parental effects is essential in correctly estimating quantitative genetic variance components. PMID:16751674

  14. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    PubMed Central

    Heo, Hwan; Lee, Won Oh; Shin, Kwang Yong; Park, Kang Ryoung

    2014-01-01

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye-tracking device. Our study is novel in the following four ways: first, the circular area in which a user's gaze position lies is defined based on the calculated gaze position and the gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this as the gaze position with the higher probability of being correct. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and an experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movement in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors. PMID:24834910

  15. A Quantitative Comparison of the Behavior of Human Ventricular Cardiac Electrophysiology Models in Tissue

    PubMed Central

    Elshrif, Mohamed M.; Cherry, Elizabeth M.

    2014-01-01

    indicating areas where existing models disagree, our findings suggest avenues for further experimental work. PMID:24416228

  16. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk from landslide hazard changes continuously in time and space and is rarely a static phenomenon in an affected area. However, one of the main challenges in quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly quantitative landslide risk analysis requires modeling the landslide intensity (e.g. flow depth, velocity, or impact pressure) affecting the elements at risk; such a quantitative approach is often lacking in medium- to regional-scale studies in the scientific literature, or is left out altogether. In this research we modelled the temporal and spatial changes of debris-flow risk in a narrow alpine valley in the north-eastern Italian Alps. The debris-flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris-flow triggering areas and to simulate debris-flow run-out using the Flow-R regional-scale model. To determine debris-flow intensities, we used a linear relationship found between back-calibrated, physically based FLO-2D simulations (local-scale models of five debris flows from 2003) and the probability values of the Flow-R software; this allowed us to assign flow depths to ten separate classes on a regional scale. Debris-flow vulnerability curves from the literature, and one curve derived specifically for our case-study area, were used to determine the damage for the different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) by the building area and number of floors.
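
    The underlying risk formulation can be sketched as follows (all names and numbers are illustrative, not values from the study):

    ```python
    # Quantitative risk = hazard probability x vulnerability(intensity) x value,
    # summed over the elements at risk (toy example).
    import numpy as np

    p_event = 0.02                               # annual debris-flow probability (hypothetical)
    flow_depth_m = np.array([0.5, 1.2, 2.0])     # modelled intensity at three buildings
    value_eur = np.array([250_000.0, 400_000.0, 180_000.0])

    def vulnerability(depth_m):
        """Toy vulnerability curve: damage fraction rising linearly with depth."""
        return np.clip(depth_m / 3.0, 0.0, 1.0)

    annual_loss = p_event * vulnerability(flow_depth_m) * value_eur
    print(f"expected annual loss: {annual_loss.sum():,.0f} EUR")
    ```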

  17. Full-Process Computer Model of Magnetron Sputter, Part I: Test Existing State-of-Art Components

    SciTech Connect

    Walton, C C; Gilmer, G H; Wemhoff, A P; Zepeda-Ruiz, L A

    2007-09-26

    This work is part of a larger project to develop a modeling capability for magnetron sputter deposition. The process is divided into four steps: plasma transport, target sputter, neutral gas and sputtered atom transport, and film growth, shown schematically in Fig. 1. Each of these is simulated separately in this Part 1 of the project, which is jointly funded by CMLS and Engineering. The Engineering portion is the plasma modeling, in step 1. The plasma modeling was performed using the Object-Oriented Particle-In-Cell code (OOPIC) from UC Berkeley [1]. Figure 2 shows the electron density in the simulated region, using magnetic field strength input from experiments by Bohlmark [2], where a scale of 1% is used. Figures 3 and 4 depict the magnetic field components, which were generated using two-dimensional linear interpolation of Bohlmark's experimental data. The goal of the overall modeling tool is to understand, and later predict, relationships between the deposition parameters we can change (such as gas pressure, gun voltage, and target-substrate distance) and key properties of the results (such as film stress, density, and stoichiometry). The simulation must use existing codes, either open-source or low-cost, not develop new codes. In Part 1 (FY07) we identified and tested the best available code for each process step, then determined whether it can cover the size and time scales we need in reasonable computation times. We also had to determine whether the process steps are sufficiently decoupled that they can be treated separately, and identify any research-level issues preventing practical use of these codes. Part 2 will consider whether the codes can be (or need to be) made to talk to each other and integrated into a whole.

  18. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast

    PubMed Central

    Pang, Wei; Coghill, George M.

    2015-01-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of its model formalism and output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The obtained simulation results are presented and analysed. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. PMID:25864377

  19. How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies

    PubMed Central

    Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A

    2013-01-01

    A number of theoretical models, experimental studies, and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies, traits associated with reproductive investment and growth evolved more slowly than maturation-related traits. In empirical observations, age-at-maturation changed faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, the rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026

  20. Quantitative fully 3D PET via model-based scatter correction

    SciTech Connect

    Ollinger, J.M.

    1994-05-01

    We have investigated the quantitative accuracy of fully 3D PET using model-based scatter correction by measuring the half-life of Ga-68 in the presence of scatter from F-18. The inner chamber of a Data Spectrum cardiac phantom was filled with 18.5 MBq of Ga-68. The outer chamber was filled with an equivalent amount of F-18. The cardiac phantom was placed in a 22 x 30.5 cm elliptical phantom containing anthropomorphic lung inserts filled with a water-Styrofoam mixture. Ten frames of dynamic data were collected over 13.6 hours on a Siemens-CTI 953B scanner with the septa retracted. The data were corrected using model-based scatter correction, which uses the emission images, transmission images, and an accurate physical model to directly calculate the scatter distribution. Both uncorrected and corrected data were reconstructed using the Promis algorithm. The scatter correction required 4.3% of the total reconstruction time. The scatter fraction in a small volume of interest in the center of the inner chamber of the cardiac insert rose from 4.0% in the first interval to 46.4% in the last interval as the ratio of F-18 activity to Ga-68 activity rose from 1:1 to 33:1. Fitting a single exponential to the last three data points yields estimates of the half-life of Ga-68 of 77.01 minutes for uncorrected data and 68.79 minutes for corrected data. Thus, scatter correction reduces the error from 13.3% to 1.2%. This suggests that model-based scatter correction is accurate in the heterogeneous attenuating medium found in the chest, making quantitative, fully 3D PET in the body possible.
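
    The half-life check can be sketched as a single-exponential fit (the three points below are illustrative, not the published measurements):

    ```python
    # Fit A(t) = A0 * exp(-lambda * t) to late-time activity and report t1/2.
    import numpy as np
    from scipy.optimize import curve_fit

    t_min = np.array([600.0, 700.0, 816.0])    # late frame mid-times, minutes (hypothetical)
    activity = np.array([21.5, 7.8, 2.4])      # VOI activity, arbitrary units (hypothetical)

    popt, _ = curve_fit(lambda t, a0, lam: a0 * np.exp(-lam * t),
                        t_min, activity, p0=[1000.0, 0.01])
    print(f"fitted half-life = {np.log(2) / popt[1]:.1f} min (Ga-68: ~68 min)")
    ```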

  1. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and to combine the test results through multiple comparisons; this approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits, adjusting for covariates, in a unified analysis. Three types of approximate F-distribution tests, based on the Pillai-Bartlett trace, the Hotelling-Lawley trace, and Wilks's lambda, are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than univariate F-tests and the optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false-positive rates and power of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power compared with individual tests of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than univariate F-tests and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models, which in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more associations than SKAT-O in the univariate case. PMID:25809955
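
    The family of multivariate test statistics named above is available off the shelf; a minimal sketch with synthetic traits and genotypes (not the cohort data) is:

    ```python
    # MANOVA on three synthetic quantitative traits vs. two variant dosages.
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(7)
    n = 200
    df = pd.DataFrame({'g1': rng.integers(0, 3, n).astype(float),   # dosage 0/1/2
                       'g2': rng.integers(0, 3, n).astype(float),
                       'age': rng.normal(50.0, 10.0, n)})
    df['ldl'] = 0.4 * df.g1 + 0.02 * df.age + rng.normal(size=n)
    df['hdl'] = -0.3 * df.g1 + rng.normal(size=n)
    df['tg']  = 0.2 * df.g2 + rng.normal(size=n)

    fit = MANOVA.from_formula('ldl + hdl + tg ~ g1 + g2 + age', data=df)
    print(fit.mv_test())   # Wilks' lambda, Pillai's trace, Hotelling-Lawley, Roy
    ```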

  2. A Novel Animal Model of Partial Optic Nerve Transection Established Using an Optic Nerve Quantitative Amputator

    PubMed Central

    Wang, Xu; Li, Ying; He, Yan; Liang, Hong-Sheng; Liu, En-Zhong

    2012-01-01

    Background: Research into retinal ganglion cell (RGC) degeneration and neuroprotection after optic nerve injury has received considerable attention, and the establishment of simple and effective animal models is of critical importance for future progress. Methodology/Principal Findings: In the present study, the optic nerves of Wistar rats were selectively semi-transected with a novel optic nerve quantitative amputator. The variation in RGC density was observed using retrogradely labeled FluoroGold at different time points after nerve injury. The densities of surviving RGCs in the experimental eyes at the different time points were 1113.69 ± 188.83 RGCs/mm² (a survival rate of 63.81% compared with the contralateral eye of the same animal) 1 week post surgery; 748.22 ± 134.75 /mm² (46.16% survival) 2 weeks post surgery; 505.03 ± 118.67 /mm² (30.52% survival) 4 weeks post surgery; 436.86 ± 76.36 /mm² (24.01% survival) 8 weeks post surgery; and 378.20 ± 66.74 /mm² (20.30% survival) 12 weeks post surgery. Simultaneously, we also measured the axonal distribution of optic nerve fibers, the latency and amplitude of pattern visual evoked potentials (P-VEPs), and the variation in pupil diameter in response to the pupillary light reflex. All of these observations and profiles were consistent with the characteristic post-injury variation of the optic nerve. These results indicate that we effectively simulated the pathological process of primary and secondary injury after optic nerve injury. Conclusions/Significance: The present quantitative optic nerve transection injury model has increased reproducibility, effectiveness, and uniformity. This model is an ideal animal model to provide a foundation for researching new treatments for nerve repair after optic nerve and/or central nerve injury. PMID:22973439

  3. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    SciTech Connect

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  4. Multi-epitope Models Explain How Pre-existing Antibodies Affect the Generation of Broadly Protective Responses to Influenza

    PubMed Central

    Zarnitsyna, Veronika I.; Lavine, Jennie; Ellebedy, Ali; Ahmed, Rafi; Antia, Rustom

    2016-01-01

    The development of next-generation influenza vaccines that elicit strain-transcendent immunity against both seasonal and pandemic viruses is a key public health goal. Targeting the evolutionarily conserved epitopes on the stem of influenza's major surface molecule, hemagglutinin (HA), is an appealing prospect, and novel vaccine formulations show promising results in animal model systems. However, studies in humans indicate that natural infection and vaccination result in limited boosting of antibodies to the stem of HA, and the level of stem-specific antibody elicited is insufficient to provide broad strain-transcendent immunity. Here, we use mathematical models of the humoral immune response to explore how pre-existing immunity affects the ability of vaccines to boost antibodies to the head and stem of HA in humans and, in particular, how it leads to the apparent lack of boosting of broadly cross-reactive antibodies to the stem epitopes. We consider hypotheses in which binding of antibody to an epitope: (i) results in more rapid clearance of the antigen; (ii) leads to the formation of antigen-antibody complexes that inhibit B cell activation through Fcγ receptor-mediated mechanisms; or (iii) masks the epitope and prevents the stimulation and proliferation of specific B cells. We find that only epitope masking, and not the first two mechanisms, is key to recapitulating the patterns in the data. We discuss the ramifications of our findings for the development of vaccines against both seasonal and pandemic influenza. PMID:27336297
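
    A toy version of the epitope-masking hypothesis (our illustration under simple assumptions, not the authors' model) can be written as an ODE in which B cell stimulation scales with the unmasked fraction of epitope, 1 / (1 + A/K):

    ```python
    # Toy epitope-masking dynamics: pre-existing antibody A masks the epitope
    # and throttles B cell proliferation (all parameters illustrative).
    import numpy as np
    from scipy.integrate import odeint

    def masking_model(state, t, K=1.0, prolif=1.5, decay=0.1):
        B, A = state                        # responding B cells, antibody titre
        free = 1.0 / (1.0 + A / K)          # unmasked fraction of the epitope
        dB = prolif * free * B - decay * B
        dA = 0.5 * B - 0.05 * A             # antibody secreted by responding B cells
        return [dB, dA]

    t = np.linspace(0.0, 20.0, 200)
    low_pre  = odeint(masking_model, [1.0, 0.1], t)    # little pre-existing antibody
    high_pre = odeint(masking_model, [1.0, 10.0], t)   # strong pre-existing antibody
    print(f"antibody boost ratio, low vs high pre-existing Ab: "
          f"{low_pre[-1, 1] / high_pre[-1, 1]:.1f}x")
    ```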

  5. A Second-Generation Device for Automated Training and Quantitative Behavior Analyses of Molecularly-Tractable Model Organisms

    PubMed Central

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  6. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    PubMed

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  7. Evaluation of New Zealand’s High-Seas Bottom Trawl Closures Using Predictive Habitat Models and Quantitative Risk Assessment

    PubMed Central

    Penney, Andrew J.; Guinotte, John M.

    2013-01-01

    United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine ecosystems (VMEs) likely to occur? 2) What is the likelihood of fisheries interaction with these VMEs? 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost:benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. The distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost:benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas. PMID:24358162

  8. Evaluation of New Zealand's high-seas bottom trawl closures using predictive habitat models and quantitative risk assessment.

    PubMed

    Penney, Andrew J; Guinotte, John M

    2013-01-01

    United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine ecosystems (VMEs) likely to occur? 2) What is the likelihood of fisheries interaction with these VMEs? 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost:benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. The distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost:benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas. PMID:24358162

  9. A study to modify, extend, and verify an existing model of interactive-constructivist school science teaching

    NASA Astrophysics Data System (ADS)

    Numedahl, Paul Joseph

    The purpose of this study was to gain an understanding of the effects that an interactive-constructive teaching and learning approach, the use of children's literature in science teaching, and parental involvement in elementary school science had on student achievement in and attitudes toward science. The study was done in the context of Science PALS, a professional development program for inservice teachers. An existing model for interactive-constructive elementary science was modified to include five model variables: student achievement, student attitudes, teacher perceptions, teacher performance, and student perceptions. Data were collected from a sample of 12 teachers and 260 third and fourth grade students. Data analysis included two components: (1) the examination of relationships between teacher performance, teacher perceptions, student achievement and attitudes, and (2) the verification of a model using path analysis. Results showed a significant correlation between teacher perceptions and student attitude. However, only one model path was significant; thus, the model could not be verified. Further examination of the significant model path was completed. Study findings included: (1) Constructivist notions of teaching and learning may cause changes in the traditional role relationship between teachers and students, leading to negative student attitudes. (2) Children who perceive parental interest toward science education are likely to have a positive attitude toward science learning, increased self-confidence in science and possess accurate ideas concerning the nature of science. (3) Students who perceive science instruction as relevant are likely to possess a positive attitude toward science learning, increased self-confidence in science, and possess accurate ideas concerning the nature of science. (4) Students who perceive their classroom as aligning with constructivist principles are likely to possess a positive attitude toward science, an increased self-confidence in science, and accurate ideas concerning the nature of science.

  10. Concepts and challenges in quantitative pharmacology and model-based drug development.

    PubMed

    Zhang, Liping; Pfister, Marc; Meibohm, Bernd

    2008-12-01

    Model-based drug development (MBDD) has been recognized as a concept to improve the efficiency of drug development. The acceptance of MBDD from regulatory agencies, industry, and academia has been growing, yet today's drug development practice is still distinctly distant from MBDD. This manuscript is aimed at clarifying the concept of MBDD and proposing practical approaches for implementing MBDD in the pharmaceutical industry. The following concepts are defined and distinguished: PK-PD modeling, exposure-response modeling, pharmacometrics, quantitative pharmacology, and MBDD. MBDD is viewed as a paradigm and a mindset in which models constitute the instruments and aims of drug development efforts. MBDD covers the whole spectrum of the drug development process instead of being limited to a certain type of modeling technique or application area. The implementation of MBDD requires pharmaceutical companies to foster innovation and make changes at three levels: (1) to establish mindsets that are willing to get acquainted with MBDD, (2) to align processes that are adaptive to the requirements of MBDD, and (3) to create a closely collaborating organization in which all members play a role in MBDD. Pharmaceutical companies that are able to embrace the changes MBDD poses will likely be able to improve their success rate in drug development, and the beneficiaries will ultimately be the patients in need. PMID:19003542

  11. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015

    PubMed Central

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions)—which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be ‘invaded’ by a newcomer third party very quickly, while the second remains immune to such invasion. PMID:27171226

  12. Quantitative Structure–Property Relationship Modeling of Remote Liposome Loading of Drugs

    PubMed Central

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-01-01

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure-Property Relationship (QSPR) models of remote liposome loading for a dataset including 60 drugs studied in 366 loading experiments carried out in-house or reported in the literature. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and five-fold external validation. The external prediction accuracy for binary models was as high as 91–96%; for continuous models the mean coefficient of determination R2 for regression of predicted versus observed values was 0.76–0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932
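
    The workflow lends itself to a compact sketch with synthetic data standing in for the 60-drug dataset; the descriptors, endpoints, and random-forest learner below are stand-ins for illustration, not the authors' exact machine-learning pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
        from sklearn.model_selection import KFold, cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(366, 20))                # computed descriptors + conditions
        dl = 0.05 + 0.3 * (X[:, 0] > 0) + rng.normal(0, 0.05, 366)  # synthetic D/L
        high_loading = (dl > 0.2).astype(int)         # binary endpoint: high vs. low D/L

        # Five-fold validation of both the binary and the continuous model.
        cv = KFold(n_splits=5, shuffle=True, random_state=1)
        acc = cross_val_score(RandomForestClassifier(random_state=1), X, high_loading, cv=cv)
        r2 = cross_val_score(RandomForestRegressor(random_state=1), X, dl, cv=cv, scoring="r2")
        print(f"binary accuracy: {acc.mean():.2f}, continuous R2: {r2.mean():.2f}")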

  13. Quantitative multi-agent models for simulating protein release from PLGA bioerodible nano- and microspheres.

    PubMed

    Barat, Ana; Crane, Martin; Ruskin, Heather J

    2008-09-29

    Using poly(lactide-co-glycolide) (PLGA) particles for drug encapsulation and delivery has recently gained considerable popularity for a number of reasons. An advantage in one sense, but a drawback of PLGA use in another, is that drug delivery systems made of this material can provide a wide range of dissolution profiles, due to their internal structure and properties related to particle manufacture. The advantages of enriching particulate drug design experimentation with computer models are evident: simulations can be used to predict and optimize design, as well as to indicate the choice of best manufacturing parameters. In the present work, we seek to understand the phenomena observed for PLGA micro- and nanospheres through Cellular Automata (CA) agent-based Monte Carlo (MC) models. Systems are studied both over large temporal scales (capturing slow erosion of PLGA) and for various spatial configurations (capturing initial as well as dynamic morphology). The major strength of this multi-agent approach is that it observes dissolution directly, by monitoring the emergent behaviour: the dissolution profile that manifests as a sphere erodes. Different problematic aspects of the modelling process are discussed in detail in this paper. The models were tested on experimental data from the literature, demonstrating very good performance. Quantitative discussion is provided throughout the text to demonstrate the practical use of the proposed model. PMID:18436414
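
    A toy version of the cellular-automata Monte Carlo idea fits in a few lines: a 2-D cross-section of a sphere erodes from the surface inward, releasing drug as pixels dissolve. Grid size, drug loading, and the per-step erosion probability are all invented, not the authors' calibrated values.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 101
        y, x = np.mgrid[:n, :n]
        r = np.hypot(x - n // 2, y - n // 2)
        polymer = r <= 45                                  # solid PLGA matrix
        drug = polymer & (rng.random((n, n)) < 0.1)        # 10% drug loading
        p_erode = 0.05                                     # per-step surface erosion probability

        released, total_drug = [], drug.sum()
        for step in range(400):
            # Surface = polymer pixels with at least one non-polymer 4-neighbour.
            pad = np.pad(polymer, 1)
            interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
            surface = polymer & ~interior
            erode = surface & (rng.random((n, n)) < p_erode)
            polymer &= ~erode
            drug &= polymer                                # drug dissolves when its pixel erodes
            released.append(1.0 - drug.sum() / total_drug) # emergent dissolution profile

        print(f"fraction released after 400 steps: {released[-1]:.2f}")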

  14. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015.

    PubMed

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions)-which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be 'invaded' by a newcomer third party very quickly, while the second remains immune to such invasion. PMID:27171226
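
    The asymmetric-invasion mechanism can be caricatured in a heavily simplified toy model (the full E/I/O mechanics are not reproduced here): supporters of a low-emotion incumbent defect to a newcomer more readily than the emotionally engaged base of the other party. All parameters below are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 10_000
        opinion = rng.choice([0, 1], size=n)     # supporters of the two incumbents
        tone = np.array([0.9, 0.3])              # emotional tone of each party's propaganda

        # Newcomer (party 2) appears; a calmer incumbent's base is easier to 'invade'.
        for step in range(30):
            incumbent = np.clip(opinion, 0, 1)
            p_defect = 0.02 * (1.0 - tone[incumbent]) * (opinion != 2)
            opinion = np.where(rng.random(n) < p_defect, 2, opinion)

        for k, name in [(0, "party 0"), (1, "party 1"), (2, "newcomer")]:
            print(f"{name}: {(opinion == k).mean():.2f}")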

  15. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple, yet they can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem owing to the lack of data relative to the size of the signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239
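
    To make the reformulation concrete, here is a minimal continuous-logic fit on an invented two-step pathway: signals propagate through normalized Hill transfer functions, and the transfer parameters are fitted as an ordinary nonlinear optimization problem. Topology, data, and parameter values are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        def hill(x, k, n=3):
            """Normalized Hill transfer function mapping [0, 1] -> [0, 1]."""
            return x**n * (1 + k**n) / (x**n + k**n)

        # Toy pathway: receptor -> kinase -> output; data for four stimulus levels.
        stimulus = np.array([0.0, 0.3, 0.6, 1.0])
        observed = np.array([0.02, 0.15, 0.70, 0.95])   # measured output (invented)

        def sse(params):
            k1, k2 = params
            output = hill(hill(stimulus, k1), k2)       # propagate through two steps
            return np.sum((output - observed) ** 2)

        fit = minimize(sse, x0=[0.5, 0.5], bounds=[(0.01, 1.0)] * 2)
        print("fitted k1, k2:", np.round(fit.x, 3), "SSE:", round(fit.fun, 4))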

  16. In vivo quantitative bioluminescence tomography using heterogeneous and homogeneous mouse models

    PubMed Central

    Liu, Junting; Wang, Yabin; Qu, Xiaochao; Li, Xiangsi; Ma, Xiaopeng; Han, Runqiang; Hu, Zhenhua; Chen, Xueli; Sun, Dongdong; Zhang, Rongqing; Chen, Duofang; Chen, Dan; Chen, Xiaoyuan; Liang, Jimin; Cao, Feng; Tian, Jie

    2010-01-01

    Bioluminescence tomography (BLT) is a new optical molecular imaging modality, which can monitor both physiological and pathological processes by using bioluminescent light-emitting probes in small living animals. In particular, this technology possesses great potential in drug development, early detection, and therapy monitoring in preclinical settings. In the present study, we developed a dual-modality BLT prototype system with a micro-computed tomography (MicroCT) registration approach, and improved the quantitative reconstruction algorithm based on the adaptive hp finite element method (hp-FEM). Detailed comparisons of source reconstruction between the heterogeneous and homogeneous mouse models were performed. The models include mice with an implanted luminescence source and tumor-bearing mice with a firefly luciferase reporter gene. Our data suggest that reconstruction based on the heterogeneous mouse model is more accurate in localization and quantification than that based on the homogeneous mouse model with appropriate optical parameters, and that BLT allows super-early tumor detection in vivo based on tomographic reconstruction of the heterogeneous mouse model signal. PMID:20588440

  17. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases.

    PubMed

    Hollingsworth, T Déirdre; Adams, Emily R; Anderson, Roy M; Atkins, Katherine; Bartsch, Sarah; Basáñez, María-Gloria; Behrend, Matthew; Blok, David J; Chapman, Lloyd A C; Coffeng, Luc; Courtenay, Orin; Crump, Ron E; de Vlas, Sake J; Dobson, Andy; Dyson, Louise; Farkas, Hajnal; Galvani, Alison P; Gambhir, Manoj; Gurarie, David; Irvine, Michael A; Jervis, Sarah; Keeling, Matt J; Kelly-Hope, Louise; King, Charles; Lee, Bruce Y; Le Rutte, Epke A; Lietman, Thomas M; Ndeffo-Mbah, Martial; Medley, Graham F; Michael, Edwin; Pandey, Abhishek; Peterson, Jennifer K; Pinsent, Amy; Porco, Travis C; Richardus, Jan Hendrik; Reimer, Lisa; Rock, Kat S; Singh, Brajendra K; Stolk, Wilma; Swaminathan, Subramanian; Torr, Steve J; Townsend, Jeffrey; Truscott, James; Walker, Martin; Zoueva, Alexandra

    2015-01-01

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an overview of a collection of novel model-based analyses which aim to address key questions on the dynamics of transmission and control of nine NTDs: Chagas disease, visceral leishmaniasis, human African trypanosomiasis, leprosy, soil-transmitted helminths, schistosomiasis, lymphatic filariasis, onchocerciasis and trachoma. Several common themes resonate throughout these analyses, including: the importance of epidemiological setting on the success of interventions; targeting groups who are at highest risk of infection or re-infection; and reaching populations who are not accessing interventions and may act as a reservoir for infection. The results also highlight the challenge of maintaining elimination 'as a public health problem' when true elimination is not reached. The models elucidate the factors that may be contributing most to persistence of disease and discuss the requirements for eventually achieving true elimination, if that is possible. Overall this collection presents new analyses to inform current control initiatives. These papers form a base from which further development of the models and more rigorous validation against a variety of datasets can help to give more detailed advice. At the moment, the models' predictions are being considered as the world prepares for a final push towards control or elimination of neglected tropical diseases by 2020. PMID:26652272

  18. Theoretical model-based quantitative optimisation of numerical modelling for eddy current NDT

    NASA Astrophysics Data System (ADS)

    Yu, Yating; Li, Xinhua; Simm, Anthony; Tian, Guiyun

    2011-06-01

    Eddy current (EC) nondestructive testing (NDT) is one of the most widely used NDT methods. Numerical modelling of NDT methods has been used as an important investigative approach alongside experimental and theoretical studies. This paper investigates the set-up of numerical modelling using the finite-element method in terms of the optimal selection of element mesh size in different regions within the model, based on theoretical analysis of EC NDT. The modelling set-up is refined and evaluated through numerical simulation, balancing both computation time and simulation accuracy. A case study in the optimisation of the modelling set-up of an EC NDT system with a cylindrical probe coil is carried out to verify the proposed optimisation approach. Here, the mesh size of the simulation model is set based on the geometries of the coil and the magnetic sensor, as well as on the skin depth in the sample, so the optimised modelling set-up remains useful even when the geometry of the EC system, the excitation frequency or the pulse width is changed, as in multi-frequency EC, sweep-frequency EC or pulsed EC systems. Furthermore, this optimisation approach can be used to improve the trade-off between accuracy and computation time in other, more complex EC NDT simulations.
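
    The core of the mesh-sizing idea can be illustrated with a back-of-envelope calculation: element size in the conductive sample is tied to the electromagnetic skin depth, and element size near the coil to the coil geometry. The "three elements per skin depth" and "ten elements across the coil" ratios below are illustrative assumptions, not the paper's derived rules.

        import math

        MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

        def skin_depth(freq_hz, sigma_s_per_m, mu_r=1.0):
            """Standard skin depth: delta = 1 / sqrt(pi * f * mu * sigma)."""
            return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma_s_per_m)

        # Aluminium sample, 10 kHz excitation.
        delta = skin_depth(1e4, 3.5e7)
        mesh_in_sample = delta / 3                 # assume >= 3 elements per skin depth
        coil_inner_radius = 2e-3
        mesh_near_coil = coil_inner_radius / 10    # assume ~10 elements across the coil

        print(f"skin depth: {delta*1e3:.3f} mm, sample mesh: {mesh_in_sample*1e6:.0f} um,"
              f" coil-region mesh: {mesh_near_coil*1e6:.0f} um")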

  19. Quantitative saltwater modeling for validation of sub-grid scale LES turbulent mixing and transport models for fire

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro; Marshall, Andre; Gollner, Michael

    2015-11-01

    A quantitative understanding of turbulent mixing and transport in buoyant flows is indispensable for accurate modeling of combustion, fire dynamics and smoke transport used in both fire safety design and investigation. This study describes the turbulent mixing behavior of scaled, unconfined plumes using a quantitative saltwater modeling technique. An analysis of density difference turbulent fluctuations, captured as the collected images scale down in resolution, allows for the determination of the largest dimension over which LES averaging should be performed. This is important as LES models must assume a distribution for sub-grid scale mixing, such as the β-PDF distribution. We showed that there is a loss of fidelity in resolving the flow for a cell size above 0.54D*, where D* is a characteristic length scale for the plume. This point represents the threshold above which the fluctuations start to grow monotonically. Turbulence statistics were also analyzed in terms of span-wise intermittency and time and space correlation coefficients. A substantial amount of ambient fluid (fresh water) was unexpectedly found in the core of the plume, and both this entrainment and the mixing process under buoyant conditions were found to depend on the resolution of the measurements used.

  20. Tracer kinetic model for quantitative imaging of thymidine utilization using [C-11]thymidine and PET

    SciTech Connect

    Mankoff, D.A.; Shields, A.F.; Lee, T.T.

    1994-05-01

    2-[C-11]thymidine, a marker of thymidine incorporation into DNA, is a PET tracer for assessing tumor proliferation. Quantitation of thymidine images is complicated by the presence of C-11 labeled metabolites, which include thymidine degradation products such as thymine, as well as labeled carbon dioxide (CO2). We have therefore formulated and analyzed a compartmental model of tracer and metabolite distribution for the estimation of the thymidine incorporation rate (TIR), which is closely tied to the DNA synthetic rate. During [C-11]thymidine studies, the activities of intact thymidine (Tdr), labeled CO2, and labeled non-carbon-dioxide metabolites (Met) are measured from blood samples. The model uses these blood time-activity curves as the inputs to three separate sets of compartments representing tissue Tdr, Met, and CO2. There are 9 parameters to be estimated by optimization of the model, given the three input functions and a tissue time-activity curve obtained from PET images taken over the 60 minutes following injection. The TIR is estimated from the rate constants for transfer between the plasma and the Tdr tissue compartments. To simplify parameter estimation, we have determined through sensitivity analysis and simulations that 4 of the parameters can be fixed to physiologically reasonable values without overly biasing the estimate of the TIR. The remaining 5 parameters, including those necessary to estimate the TIR, can be floated in the optimization and reliably determined. Simulations show that errors in the assumed values for the fixed parameters lead to worst-case errors in the TIR estimate on the order of 25-30%. We therefore conclude that quantitative imaging of tumor proliferation with [C-11]thymidine is feasible and may be advantageous in tumor imaging, particularly for following the response of tumors to therapy.
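
    The estimation problem can be sketched with a deliberately reduced compartment structure and invented input curves standing in for the measured blood data; the incorporation-rate proxy computed at the end mirrors the idea of deriving the TIR from fitted rate constants, not the authors' exact 9-parameter model.

        import numpy as np
        from scipy.integrate import odeint
        from scipy.optimize import curve_fit

        t = np.linspace(0, 60, 121)                      # minutes
        tdr_blood = t * np.exp(-t / 8.0)                 # invented input functions
        co2_blood = 0.4 * (1 - np.exp(-t / 15.0))
        met_blood = 0.3 * (1 - np.exp(-t / 10.0))

        def tissue_tac(t, K1, k2, k3, v_met, v_co2):
            """Free thymidine + DNA compartments plus metabolite spillover terms."""
            def rhs(c, ti):
                free, dna = c
                tdr = np.interp(ti, t, tdr_blood)
                return [K1 * tdr - (k2 + k3) * free, k3 * free]
            free, dna = odeint(rhs, [0.0, 0.0], t).T
            return free + dna + v_met * met_blood + v_co2 * co2_blood

        true = (0.1, 0.15, 0.08, 0.5, 0.3)
        pet = tissue_tac(t, *true) + np.random.default_rng(4).normal(0, 0.01, t.size)
        popt, _ = curve_fit(tissue_tac, t, pet, p0=[0.05, 0.1, 0.05, 0.3, 0.2], bounds=(0, 2))
        K1, k2, k3 = popt[:3]
        print(f"estimated incorporation-rate proxy K1*k3/(k2+k3): {K1 * k3 / (k2 + k3):.4f}")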

  1. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    NASA Technical Reports Server (NTRS)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1 rather than along the typical slope 0.52 terrestrial fractionation line occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process that suggested that differential photochemical dissociation processes could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  2. Quantitative modeling of the reaction/diffusion kinetics of two-chemistry photopolymers

    NASA Astrophysics Data System (ADS)

    Kowalski, Benjamin Andrew

    Optically driven diffusion in photopolymers is an appealing material platform for a broad range of applications, in which the recorded refractive index patterns serve either as images (e.g. data storage, display holography) or as optical elements (e.g. custom GRIN components, integrated optical devices). A quantitative understanding of the reaction/diffusion kinetics is difficult to obtain directly, but is nevertheless necessary in order to fully exploit the wide array of design freedoms in these materials. A general strategy for characterizing these kinetics is proposed, in which key processes are decoupled and independently measured. This strategy enables prediction of a material's potential refractive index change, solely on the basis of its chemical components. The degree to which a material does not reach this potential reveals the fraction of monomer that has participated in unwanted reactions, reducing spatial resolution and dynamic range. This approach is demonstrated for a model material similar to commercial media, achieving quantitative predictions of index response over three orders of exposure dose (~1 to ~10^3 mJ cm^-2) and three orders of feature size (0.35 to 500 microns). The resulting insights enable guided, rational design of new material formulations with demonstrated performance improvement.

  3. Climate change and dengue: a critical and systematic review of quantitative modelling approaches

    PubMed Central

    2014-01-01

    Background Many studies have found associations between climatic conditions and dengue transmission. However, there is a debate about the future impacts of climate change on dengue transmission. This paper reviewed epidemiological evidence on the relationship between climate and dengue with a focus on quantitative methods for assessing the potential impacts of climate change on global dengue transmission. Methods A literature search was conducted in October 2012, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search focused on peer-reviewed journal articles published in English from January 1991 through October 2012. Results Sixteen studies met the inclusion criteria and most studies showed that the transmission of dengue is highly sensitive to climatic conditions, especially temperature, rainfall and relative humidity. Studies on the potential impacts of climate change on dengue indicate increased climatic suitability for transmission and an expansion of the geographic regions at risk during this century. A variety of quantitative modelling approaches were used in the studies. Several key methodological issues and current knowledge gaps were identified through this review. Conclusions It is important to assemble spatio-temporal patterns of dengue transmission compatible with long-term data on climate and other socio-ecological changes and this would advance projections of dengue risks associated with climate change. PMID:24669859

  4. Geochronological models based on quantitative signal processing of chemical stratigraphic data

    SciTech Connect

    Williams, D.F.; Trainor, D.M.; Lerche, I.

    1989-03-01

    Chemical stratigraphy is becoming an increasingly important tool in chronostratigraphic interpretations of exploration wells. Chemical stratigraphy data are well suited to modeling in the time and frequency domains with quantitative signal processing and data-dependent filtering techniques, such as power spectral analysis, autocorrelation, cross-correlation, cross power spectral analysis, phase-sensitive detection, and matched filters. Properly analyzed quantitatively, chemical stratigraphy data provide an ideal framework for stratigraphic correlation of thick sedimentary sections with a high degree of resolution, chronostratigraphic interpretations with a high degree of reliability, as well as mapping and timing of diagenetic trends and gradients. In this presentation, the authors use a combination of these techniques to compare several recently developed types of chemical stratigraphies. One type of record is based on global changes in the stable isotopic composition of seawater. The other type is based on composite stable isotope and geochemical records for Pliocene-Pleistocene exploration wells from the northwestern Gulf of Mexico. Coherency analysis was used to subtract the global portion of the signal from the Gulf sections, isolating local effects due to freshwater discharge events, paleotemperature changes, diagenesis, and hydrocarbon migration.
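
    As a small illustration of one tool in this kit, cross-correlation can recover the lag that best aligns a well's isotope record with a global reference curve. Both records below are synthetic stand-ins.

        import numpy as np

        rng = np.random.default_rng(5)
        n, true_lag = 500, 37
        reference = np.cumsum(rng.normal(size=n))                     # global d18O-like signal
        well = np.roll(reference, true_lag) + rng.normal(0, 0.5, n)   # lagged copy + local noise

        # Normalize, then scan candidate lags for the peak correlation.
        ref = (reference - reference.mean()) / reference.std()
        w = (well - well.mean()) / well.std()
        lags = np.arange(-100, 101)
        cc = [np.corrcoef(np.roll(ref, lag), w)[0, 1] for lag in lags]
        print("estimated lag (samples):", lags[int(np.argmax(cc))])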

  5. Quantitative models of sediment generation and provenance: State of the art and future developments

    NASA Astrophysics Data System (ADS)

    Weltje, Gert Jan

    2012-12-01

    An overview of quantitative approaches to analysis and modelling of sediment generation and provenance is presented, with an emphasis on major framework components as determined by means of petrographic techniques. Conceptual models of sediment provenance are shown to be consistent with two classes of numerical-statistical models, i.e. linear mixing models and compositional linear models. These cannot be placed within a common mathematical framework, because the former requires that sediment composition is expressed in terms of proportions, whereas the latter requires that sediment composition is expressed in terms of log-ratios of proportions. Additivity of proportions, a fundamental assumption in linear mixing models, cannot be readily expressed in log-ratio terms. Linear mixing models may be used to describe compositional variability in terms of physical and conceptual (un)mixing. Models of physical (un)mixing are appropriate for describing compositional variation within transport-invariant subpopulations of grains as a result of varying rates of supply of detritus from multiple sources. Conceptual (un)mixing governs the relations among chemical, mineralogical and petrographic characteristics of sediments, which represent different descriptive levels within a compositional hierarchy. Compositional linear process models may be used to describe compositional and/or textural evolution resulting from selective modifications induced by sediment transport, as well as chemical and mechanical weathering. Current approaches to modelling of surface processes treat the coupled evolution of source areas and sedimentary basins in terms of bulk mass transfer only, and do not take into account compositional and textural sediment properties. Moving from the inverse modelling approach embodied in provenance research to process-based forward models of sediment generation which provide detailed predictions of sediment properties meets with considerable (albeit not insurmountable) difficulties.
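
    The incompatibility of the two model classes is easy to demonstrate numerically: proportions mix linearly under physical mixing, while compositional linear models operate on centred log-ratios, and the two operations do not agree. The QFL-style compositions below are invented.

        import numpy as np

        comps = np.array([
            [0.70, 0.20, 0.10],   # quartz, feldspar, lithics (proportions sum to 1)
            [0.50, 0.30, 0.20],
            [0.30, 0.30, 0.40],
        ])

        # Linear mixing: a 50:50 physical mixture is a weighted average of proportions.
        mixture = 0.5 * comps[0] + 0.5 * comps[2]
        print("50:50 physical mixture:", mixture.round(3))

        # Compositional view: centred log-ratio (clr) transform for linear modelling.
        def clr(x):
            g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean
            return np.log(x / g)

        print("clr coordinates:\n", clr(comps).round(3))
        # Averaging clr coordinates and back-transforming does NOT reproduce the
        # physical mixture above -- the two model classes are genuinely different.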

  6. Quantitative structure-activity relationship models for prediction of the toxicity of polybrominated diphenyl ether congeners.

    PubMed

    Wang, Yawei; Liu, Huanxiang; Zhao, Chunyan; Liu, Hanxia; Cai, Zongwei; Jiang, Guibin

    2005-07-01

    Levels of polybrominated diphenyl ethers (PBDEs) are increasing in the environment and may cause long-term health problems in humans. The similarity in the chemical structures of PBDEs and other halogenated aromatic pollutants hints at the possibility that they might share similar toxicological effects. In this work, three-dimensional quantitative structure-activity relationship (3D-QSAR) models, using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA), were built based on calculated structural indices and a reported experimental toxicology index (aryl hydrocarbon receptor relative binding affinities, RBA) of 18 PBDE congeners, to determine the factors required for the RBA of these PBDEs. After performing leave-one-out cross-validation, satisfactory results were obtained with cross-validated Q2 and R2 values of 0.580 and 0.995 for the CoMFA model and 0.680 and 0.982 for the CoMSIA model, respectively. The results showed clearly that the nonplanar conformations of PBDEs result in the lowest energy level and that the electrostatic index was the main factor reflecting the RBA of PBDEs. The two QSAR models were then used to predict the RBA values of 46 PBDEs for which experimental values are unavailable at present. PMID:16053097

  7. A quantitative structure-activity relationship model for the intrinsic activity of uncouplers of oxidative phosphorylation.

    PubMed

    Spycher, Simon; Escher, Beate I; Gasteiger, Johann

    2005-12-01

    A quantitative structure-activity relationship (QSAR) has been derived for the prediction of the activity of phenols in uncoupling oxidative and photophosphorylation. Twenty-one compounds with experimental data for uncoupling activity as well as for the acid dissociation constant, pKa, and for partitioning constants of the neutral and the charged species into model membranes were analyzed. From these measured data, the effective concentration in the membrane was derived, which allowed the study of the intrinsic activity of uncouplers within the membrane. A linear regression model for the intrinsic activity could be established using the following three descriptors: solvation free energies of the anions, an estimate for heterodimer formation describing transport processes, and pKa values describing the speciation of the phenols. Next, the aqueous effect concentrations were modeled by combining the model for the intrinsic uncoupling activity with descriptors accounting for the uptake into membranes. Results obtained with experimental membrane-water partitioning data were compared with the results obtained with experimental octanol-water partition coefficients, log Kow, and with calculated log Kow values. The properties of these different measures of lipophilicity were critically discussed. PMID:16359176

  8. An Efficient Bayesian Model Selection Approach for Interacting Quantitative Trait Loci Models With Many Effects

    PubMed Central

    Yi, Nengjun; Shriner, Daniel; Banerjee, Samprit; Mehta, Tapan; Pomp, Daniel; Yandell, Brian S.

    2007-01-01

    We extend our Bayesian model selection framework for mapping epistatic QTL in experimental crosses to include environmental effects and gene–environment interactions. We propose a new, fast Markov chain Monte Carlo algorithm to explore the posterior distribution of unknowns. In addition, we take advantage of any prior knowledge about genetic architecture to increase posterior probability on more probable models. These enhancements have significant computational advantages in models with many effects. We illustrate the proposed method by detecting new epistatic and gene–sex interactions for obesity-related traits in two real data sets of mice. Our method has been implemented in the freely available package R/qtlbim (http://www.qtlbim.org) to facilitate the general usage of the Bayesian methodology for genomewide interacting QTL analysis. PMID:17483424

  9. Quantitative Mapping of Reversible Mitochondrial Complex I Cysteine Oxidation in a Parkinson Disease Mouse Model*

    PubMed Central

    Danielson, Steven R.; Held, Jason M.; Oo, May; Riley, Rebeccah; Gibson, Bradford W.; Andersen, Julie K.

    2011-01-01

    Differential cysteine oxidation within mitochondrial Complex I has been quantified in an in vivo oxidative stress model of Parkinson disease. We developed a strategy that incorporates rapid and efficient immunoaffinity purification of Complex I followed by differential alkylation and quantitative detection using sensitive mass spectrometry techniques. This method allowed us to quantify the reversible cysteine oxidation status of 34 distinct cysteine residues out of a total 130 present in murine Complex I. Six Complex I cysteine residues were found to display an increase in oxidation relative to controls in brains from mice undergoing in vivo glutathione depletion. Three of these residues were found to reside within iron-sulfur clusters of Complex I, suggesting that their redox state may affect electron transport function. PMID:21196577

  10. A suite of models to support the quantitative assessment of spread in pest risk analysis.

    PubMed

    Robinet, Christelle; Kehlenbeck, Hella; Kriticos, Darren J; Baker, Richard H A; Battisti, Andrea; Brunel, Sarah; Dupin, Maxime; Eyre, Dominic; Faccoli, Massimo; Ilieva, Zhenya; Kenis, Marc; Knight, Jon; Reynaud, Philippe; Yart, Annie; van der Werf, Wopke

    2012-01-01

    Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice. PMID:23056174
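
    In the spirit of the simpler members of such a suite, a generic spread model needs only a growth rate, a dispersal fraction, and a suitability layer. The sketch below (with invented parameters, on a small grid) produces the kind of expansion map the abstract describes; it is an illustration of the general idea, not one of the four published models.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100
        suitability = rng.random((n, n)) < 0.6      # cells where establishment is possible
        suitability[50, 50] = True                  # ensure the introduction point is habitable
        density = np.zeros((n, n))
        density[50, 50] = 1.0                       # point of first introduction
        r, d = 0.8, 0.2                             # growth rate, dispersal fraction

        for year in range(40):
            density += r * density * (1 - density)  # logistic growth within cells
            moved = d * density                      # nearest-neighbour dispersal
            spread = (np.roll(moved, 1, 0) + np.roll(moved, -1, 0)
                      + np.roll(moved, 1, 1) + np.roll(moved, -1, 1)) / 4
            density = density - moved + spread
            density *= suitability                   # no establishment outside habitat

        print(f"occupied cells after 40 years: {(density > 0.01).sum()}")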

  11. A Suite of Models to Support the Quantitative Assessment of Spread in Pest Risk Analysis

    PubMed Central

    Robinet, Christelle; Kehlenbeck, Hella; Kriticos, Darren J.; Baker, Richard H. A.; Battisti, Andrea; Brunel, Sarah; Dupin, Maxime; Eyre, Dominic; Faccoli, Massimo; Ilieva, Zhenya; Kenis, Marc; Knight, Jon; Reynaud, Philippe; Yart, Annie; van der Werf, Wopke

    2012-01-01

    Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species specific, biologically and technically complex, and data hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice. PMID:23056174

  12. Quantitative assessment of biological impact using transcriptomic data and mechanistic network models

    SciTech Connect

    Thomson, Ty M.; Sewer, Alain; Martin, Florian; Belcastro, Vincenzo; Frushour, Brian P.; Gebel, Stephan; Park, Jennifer; Schlage, Walter K.; Talikka, Marja; Vasilyev, Dmitry M.; Westra, Jurjen W.; Hoeng, Julia; Peitsch, Manuel C.

    2013-11-01

    Exposure to biologically active substances such as therapeutic drugs or environmental toxicants can impact biological systems at various levels, affecting individual molecules, signaling pathways, and overall cellular processes. The ability to derive mechanistic insights from the resulting system responses requires the integration of experimental measures with a priori knowledge about the system and the interacting molecules therein. We developed a novel systems biology-based methodology that leverages mechanistic network models and transcriptomic data to quantitatively assess the biological impact of exposures to active substances. Hierarchically organized network models were first constructed to provide a coherent framework for investigating the impact of exposures at the molecular, pathway and process levels. We then validated our methodology using novel and previously published experiments. For both in vitro systems with simple exposure and in vivo systems with complex exposures, our methodology was able to recapitulate known biological responses matching expected or measured phenotypes. In addition, the quantitative results were in agreement with experimental endpoint data for many of the mechanistic effects that were assessed, providing further objective confirmation of the approach. We conclude that our methodology evaluates the biological impact of exposures in an objective, systematic, and quantifiable manner, enabling the computation of a systems-wide and pan-mechanistic biological impact measure for a given active substance or mixture. Our results suggest that various fields of human disease research, from drug development to consumer product testing and environmental impact analysis, could benefit from using this methodology. - Highlights: • The impact of biologically active substances is quantified at multiple levels. • The systems-level impact integrates the perturbations of individual networks. • The networks capture the relationships between

  13. Quantitative optical imaging of vascular response in vivo in a model of peripheral arterial disease.

    PubMed

    Poole, Kristin M; Tucker-Schwartz, Jason M; Sit, Wesley W; Walsh, Alex J; Duvall, Craig L; Skala, Melissa C

    2013-10-15

    The mouse hind limb ischemia (HLI) model is well established for studying collateral vessel formation and testing therapies for peripheral arterial disease, but there is a lack of quantitative techniques for intravitally analyzing blood vessel structure and function. To address this need, non-invasive, quantitative optical imaging techniques were developed to assess the time-course of recovery in the mouse HLI model. Hyperspectral imaging and optical coherence tomography (OCT) were used to non-invasively image hemoglobin oxygen saturation and microvessel morphology plus blood flow, respectively, in the anesthetized mouse after induction of HLI. Hyperspectral imaging detected significant increases in hemoglobin saturation in the ischemic paw as early as 3 days after femoral artery ligation (P < 0.01), and significant increases in distal blood flow were first detected with OCT 14 days postsurgery (P < 0.01). Intravital OCT images of the adductor muscle vasculature revealed corkscrew collateral vessels characteristic of the arteriogenic response to HLI. The hyperspectral imaging and OCT data significantly correlated with each other and with laser Doppler perfusion imaging (LDPI) and tissue oxygenation sensor data (P < 0.01). However, OCT measurements acquired depth-resolved information and revealed more sustained flow deficits following surgery that may be masked by more superficial measurements (LDPI, hyperspectral imaging). Therefore, intravital OCT may provide a robust biomarker for the late stages of ischemic limb recovery. This work validates non-invasive acquisition of both functional and morphological data with hyperspectral imaging and OCT. Together, these techniques provide cardiovascular researchers an unprecedented and comprehensive view of the temporal dynamics of HLI recovery in living mice. PMID:23955718

  14. Quantitative optical imaging of vascular response in vivo in a model of peripheral arterial disease

    PubMed Central

    Poole, Kristin M.; Tucker-Schwartz, Jason M.; Sit, Wesley W.; Walsh, Alex J.; Duvall, Craig L.

    2013-01-01

    The mouse hind limb ischemia (HLI) model is well established for studying collateral vessel formation and testing therapies for peripheral arterial disease, but there is a lack of quantitative techniques for intravitally analyzing blood vessel structure and function. To address this need, non-invasive, quantitative optical imaging techniques were developed to assess the time-course of recovery in the mouse HLI model. Hyperspectral imaging and optical coherence tomography (OCT) were used to non-invasively image hemoglobin oxygen saturation and microvessel morphology plus blood flow, respectively, in the anesthetized mouse after induction of HLI. Hyperspectral imaging detected significant increases in hemoglobin saturation in the ischemic paw as early as 3 days after femoral artery ligation (P < 0.01), and significant increases in distal blood flow were first detected with OCT 14 days postsurgery (P < 0.01). Intravital OCT images of the adductor muscle vasculature revealed corkscrew collateral vessels characteristic of the arteriogenic response to HLI. The hyperspectral imaging and OCT data significantly correlated with each other and with laser Doppler perfusion imaging (LDPI) and tissue oxygenation sensor data (P < 0.01). However, OCT measurements acquired depth-resolved information and revealed more sustained flow deficits following surgery that may be masked by more superficial measurements (LDPI, hyperspectral imaging). Therefore, intravital OCT may provide a robust biomarker for the late stages of ischemic limb recovery. This work validates non-invasive acquisition of both functional and morphological data with hyperspectral imaging and OCT. Together, these techniques provide cardiovascular researchers an unprecedented and comprehensive view of the temporal dynamics of HLI recovery in living mice. PMID:23955718

  15. A quantitative trait locus for variation in dopamine metabolism mapped in a primate model using reference sequences from related species

    PubMed Central

    Freimer, Nelson B.; Service, Susan K.; Ophoff, Roel A.; Jasinska, Anna J.; McKee, Kevin; Villeneuve, Amelie; Belisle, Alexandre; Bailey, Julia N.; Breidenthal, Sherry E.; Jorgensen, Matthew J.; Mann, J. John; Cantor, Rita M.; Dewar, Ken; Fairbanks, Lynn A.

    2007-01-01

    Non-human primates (NHP) provide crucial research models. Their strong similarities to humans make them particularly valuable for understanding complex behavioral traits and brain structure and function. We report here the genetic mapping of an NHP nervous system biologic trait, the cerebrospinal fluid (CSF) concentration of the dopamine metabolite homovanillic acid (HVA), in an extended inbred vervet monkey (Chlorocebus aethiops sabaeus) pedigree. CSF HVA is an index of CNS dopamine activity, which is hypothesized to contribute substantially to behavioral variations in NHP and humans. For quantitative trait locus (QTL) mapping, we carried out a two-stage procedure. We first scanned the genome using a first-generation genetic map of short tandem repeat markers. Subsequently, using >100 SNPs within the most promising region identified by the genome scan, we mapped a QTL for CSF HVA at a genome-wide level of significance (peak logarithm of odds score >4) to a narrow well delineated interval (<10 Mb). The SNP discovery exploited conserved segments between human and rhesus macaque reference genome sequences. Our findings demonstrate the potential of using existing primate reference genome sequences for designing high-resolution genetic analyses applicable across a wide range of NHP species, including the many for which full genome sequences are not yet available. Leveraging genomic information from sequenced to nonsequenced species should enable the utilization of the full range of NHP diversity in behavior and disease susceptibility to determine the genetic basis of specific biological and behavioral traits. PMID:17884980

  16. The role of pre-existing tectonic structures and magma chamber shape on the geometry of resurgent blocks: Analogue models

    NASA Astrophysics Data System (ADS)

    Marotta, Enrica; de Vita, Sandro

    2014-02-01

    A set of analogue models has been carried out to understand the role of an asymmetric magma chamber on the resurgence-related deformation of a previously deformed crustal sector. The results are then compared with those of similar experiments previously performed using a symmetric magma chamber. Two lines of experiments were performed to simulate resurgence in an area with a simple graben-like structure and resurgence in a caldera that collapsed within the previously generated graben-like structure. On the basis of commonly accepted scaling laws, we used dry-quartz sand to simulate the brittle behaviour of the crust and Newtonian silicone to simulate the ductile behaviour of the intruding magma. An asymmetric shape of the magma chamber was simulated by moulding the upper surface of the silicone. The resulting empty space was then filled with sand. The results of the asymmetric-resurgence experiments are similar to those obtained with symmetrically shaped silicone. In the sample with a simple graben-like structure, resurgence occurs through the formation of a discrete number of differentially displaced blocks. The most uplifted portion of the deformed depression floor is affected by newly formed, high-angle, inward-dipping reverse ring-faults. The least uplifted portion of the caldera is affected by normal faults with similar orientation, either newly formed or resulting from reactivation of the pre-existing graben faults. This asymmetric block resurgence is also observed in experiments performed with a previous caldera collapse. In this case, the caldera-collapse-related reverse ring-fault is completely erased along the shortened side, and enhances the effect of the extensional faults on the opposite side, so facilitating the intrusion of the silicone. The most uplifted sector, due to an asymmetrically shaped intrusion, is always located above the thickest overburden. These results suggest that the stress field induced by resurgence is likely dictated by

  17. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part II Scholastic Models. Part II, Chapter 4.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…

  18. Quantitative prediction of integrase inhibitor resistance from genotype through consensus linear regression modeling

    PubMed Central

    2013-01-01

    Background Integrase inhibitors (INI) form a new drug class in the treatment of HIV-1 patients. We developed a linear regression modeling approach to make a quantitative raltegravir (RAL) resistance phenotype prediction, as Fold Change in IC50 against a wild type virus, from mutations in the integrase genotype. Methods We developed a clonal genotype-phenotype database with 991 clones from 153 clinical isolates of INI naïve and RAL treated patients, and 28 site-directed mutants. We developed the RAL linear regression model in two stages, employing a genetic algorithm (GA) to select integrase mutations by consensus. First, we ran multiple GAs to generate first order linear regression models (GA models) that were stochastically optimized to reach a goal R2 accuracy, and consisted of a fixed-length subset of integrase mutations to estimate INI resistance. Secondly, we derived a consensus linear regression model in a forward stepwise regression procedure, considering integrase mutations or mutation pairs by descending prevalence in the GA models. Results The most frequently occurring mutations in the GA models were 92Q, 97A, 143R and 155H (all 100%), 143G (90%), 148H/R (89%), 148K (88%), 151I (81%), 121Y (75%), 143C (72%), and 74M (69%). The RAL second order model contained 30 single mutations and five mutation pairs (p < 0.01): 143C/R&97A, 155H&97A/151I and 74M&151I. The R2 performance of this model on the clonal training data was 0.97, and 0.78 on an unseen population genotype-phenotype dataset of 171 clinical isolates from RAL treated and INI naïve patients. Conclusions We describe a systematic approach to derive a model for predicting INI resistance from a limited amount of clonal samples. Our RAL second order model is made available as an Additional file for calculating a resistance phenotype as the sum of integrase mutations and mutation pairs. PMID:23282253
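
    The two-stage procedure can be caricatured on simulated data: many stochastic fits on random fixed-length mutation subsets, then a consensus model built from the most frequently retained mutations. Sample sizes, effect sizes, and the goal R2 below are invented.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n_clones, n_mut = 400, 40
        X = (rng.random((n_clones, n_mut)) < 0.1).astype(float)   # mutation presence/absence
        true_w = np.zeros(n_mut)
        true_w[[3, 7, 15]] = [2.0, 1.5, 1.0]                      # resistance mutations
        y = X @ true_w + rng.normal(0, 0.3, n_clones)             # log fold-change in IC50

        # Stage 1: many stochastic fits on fixed-size random subsets; keep those
        # reaching the goal R2 and count how often each mutation is retained.
        counts = np.zeros(n_mut)
        for _ in range(2000):
            subset = rng.choice(n_mut, size=8, replace=False)
            model = LinearRegression().fit(X[:, subset], y)
            if model.score(X[:, subset], y) > 0.8:
                counts[subset] += 1

        # Stage 2: consensus model from the most frequently selected mutations.
        consensus = np.sort(np.argsort(counts)[-3:])
        final = LinearRegression().fit(X[:, consensus], y)
        print("consensus mutations:", consensus.tolist(),
              "R2:", round(final.score(X[:, consensus], y), 3))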

  19. Quantitative Comparison of a New Ab Initio Micrometeor Ablation Model with an Observationally Verifiable Standard Model

    NASA Astrophysics Data System (ADS)

    Meisel, David D.; Szasz, Csilla; Kero, Johan

    2008-06-01

    The Arecibo UHF radar is able to detect the head-echoes of micron-sized meteoroids up to velocities of 75 km/s over a height range of 80-140 km. Because of their small size there are many uncertainties involved in calculating their above atmosphere properties as needed for orbit determination. An ab initio model of meteor ablation has been devised that should work over the mass range 10^-16 kg to 10^-7 kg, but the faint end of this range cannot be observed by any other method and so direct verification is not possible. On the other hand, the EISCAT UHF radar system detects micrometeors in the high mass part of this range and its observations can be fit to a “standard” ablation model and calibrated to optical observations (Szasz et al. 2007). In this paper, we present a preliminary comparison of the two models, one observationally confirmable. Among the features of the ab initio model that are different from the “standard” model are: (1) uses the experimentally based low pressure vaporization theory of O’Hanlon (A user’s guide to vacuum technology, 2003) for ablation, (2) uses velocity dependent functions fit from experimental data on heat transfer, luminosity and ionization efficiencies measured by Friichtenicht and Becker (NASA Special Publication 319: 53, 1973) for micron sized particles, (3) assumes a density and temperature dependence of the micrometeoroids and ablation product specific heats, (4) assumes a density and size dependent value for the thermal emissivity and (5) uses a unified synthesis of experimental data for the most important meteoroid elements and their oxides through least square fits (as functions of temperature, density, and/or melting point) of the tables of thermodynamic parameters given in Weast (CRC Handbook of Chemistry and Physics, 1984), Gray (American Institute of Physics Handbook, 1972), and Cox (Allen’s Astrophysical Quantities, 2000). This utilization of mostly experimentally determined data is the main reason for

  20. 40 CFR Table 4 to Subpart Mmmm of... - Model Rule-Operating Parameters for Existing Sewage Sludge Incineration Units a

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Existing Sewage Sludge Incineration Units a 4 Table 4 to Subpart MMMM of Part 60 Protection of Environment... SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Pt. 60... Sewage Sludge Incineration Units a For these operating parameters You must establish these...

  1. 40 CFR Table 4 to Subpart Mmmm of... - Model Rule-Operating Parameters for Existing Sewage Sludge Incineration Units a

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Existing Sewage Sludge Incineration Units a 4 Table 4 to Subpart MMMM of Part 60 Protection of Environment... SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge Incineration Units Pt. 60... Sewage Sludge Incineration Units a For these operating parameters You must establish these...

  2. Quantitative structure-retention relationship modeling of gas chromatographic retention times based on thermodynamic data.

    PubMed

    Ebrahimi-Najafabadi, Heshmatollah; McGinitie, Teague M; Harynuk, James J

    2014-09-01

    Thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCp for 156 compounds comprising alkanes, alkyl halides and alcohols were determined for a 5% phenyl 95% methyl stationary phase. The determination of the thermodynamic parameters relies on a Nelder-Mead simplex optimization to obtain the parameters rapidly. Two methodologies, external and leave-one-out cross-validation, were applied to assess the robustness of the estimated thermodynamic parameters. The largest absolute errors in predicted retention time across all temperature ramps and all compounds were 1.5 and 0.3 s for the external and internal sets, respectively. The possibility of an in silico extension of the thermodynamic library was tested using a quantitative structure-retention relationship (QSRR) methodology. The estimated thermodynamic parameters were utilized to develop QSRR models. Individual partial least squares (PLS) models were developed for each of the three classes of molecules. R2 values for the test sets of all models across all temperature ramps were larger than 0.99, and the average relative errors in retention time predictions of the test sets for alkanes, alcohols, and alkyl halides were 1.8%, 2.4%, and 2.5%, respectively. PMID:25035236
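
    A condensed sketch of the thermodynamic model and its simplex fit: ΔH, ΔS, and ΔCp define the retention factor k(T), retention time under a ramp follows from marching the solute along the column, and Nelder-Mead recovers the parameters from retention times on several ramps. Column constants and parameter values below are invented, not the paper's.

        import numpy as np
        from scipy.optimize import minimize

        R, T0, t_m, beta = 8.314, 373.15, 60.0, 250.0  # J/mol/K, ref T (K), hold-up (s), phase ratio

        def k_factor(T, dH, dS, dCp):
            """Retention factor with temperature-dependent enthalpy and entropy."""
            dH_T = dH + dCp * (T - T0)
            dS_T = dS + dCp * np.log(T / T0)
            return np.exp(-dH_T / (R * T) + dS_T / R) / beta

        def retention_time(params, ramp, T_start=323.15, dt=0.5):
            """March the solute along the column under a linear ramp (K/s)."""
            t, x = 0.0, 0.0
            while t < 7200:
                T = T_start + ramp * t
                step = dt / (t_m * (1 + k_factor(T, *params)))
                if x + step >= 1.0:
                    return t + dt * (1.0 - x) / step  # interpolate the elution instant
                x += step
                t += dt
            return t

        true = (-40e3, -80.0, -60.0)        # dH (J/mol), dS (J/mol/K), dCp (J/mol/K)
        ramps = (2 / 60, 5 / 60, 10 / 60)   # 2, 5 and 10 K/min expressed in K/s
        observed = [retention_time(true, r) for r in ramps]

        loss = lambda p: sum((retention_time(p, r) - tr) ** 2 for r, tr in zip(ramps, observed))
        fit = minimize(loss, x0=(-35e3, -70.0, -40.0), method="Nelder-Mead")
        print("fitted tR (s):", [round(retention_time(fit.x, r), 1) for r in ramps],
              "observed:", [round(tr, 1) for tr in observed])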

  3. Disentangling the Complexity of HGF Signaling by Combining Qualitative and Quantitative Modeling.

    PubMed

    D'Alessandro, Lorenza A; Samaga, Regina; Maiwald, Tim; Rho, Seong-Hwan; Bonefas, Sandra; Raue, Andreas; Iwamoto, Nao; Kienast, Alexandra; Waldow, Katharina; Meyer, Rene; Schilling, Marcel; Timmer, Jens; Klamt, Steffen; Klingmüller, Ursula

    2015-04-01

    Signaling pathways are characterized by crosstalk, feedback and feedforward mechanisms giving rise to highly complex and cell-context-specific signaling networks. Dissecting the underlying relations is crucial to predict the impact of targeted perturbations. However, a major challenge in identifying cell-context-specific signaling networks is the enormous number of possible interactions. Here, we report a novel hybrid mathematical modeling strategy to systematically unravel hepatocyte growth factor (HGF)-stimulated phosphoinositide-3-kinase (PI3K) and mitogen-activated protein kinase (MAPK) signaling, which critically contribute to liver regeneration. By combining time-resolved quantitative experimental data generated in primary mouse hepatocytes with interaction graph and ordinary differential equation modeling, we identify and experimentally validate a network structure that represents the experimental data best and indicates specific crosstalk mechanisms. Whereas the identified network is robust against single perturbations, combinatorial inhibition strategies are predicted that result in a strong reduction of Akt and ERK activation. Thus, by capitalizing on the advantages of the two modeling approaches, we reduce the high combinatorial complexity and identify cell-context-specific signaling networks. PMID:25905717
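
    For readers unfamiliar with the ordinary-differential-equation layer of such hybrid strategies, the toy model below sketches two receptor-driven branches (Akt and ERK) coupled by a single crosstalk term. It is a generic illustration with arbitrary rate constants, not the authors' validated network.

        # Toy ODE sketch of crosstalk between two signaling branches.
        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, hgf=1.0, k_cross=0.5):
            akt, erk = y
            d_akt = 1.0 * hgf - 0.3 * akt                        # activation minus decay
            d_erk = 0.8 * hgf / (1 + k_cross * akt) - 0.2 * erk  # Akt dampens the ERK branch
            return [d_akt, d_erk]

        sol = solve_ivp(rhs, (0, 60), [0.0, 0.0], t_eval=np.linspace(0, 60, 121))
        print(sol.y[:, -1])   # late-time Akt and ERK levels under sustained HGF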

  4. Switching mechanism for TiO2 memristor and quantitative analysis of exponential model parameters

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Ping; Chen, Min; Shen, Yi

    2015-08-01

    The memristor, as the fourth basic circuit element, has drawn worldwide attention since its physical implementation was released by HP Labs in 2008. However, physical realization of memristors at the nano-scale remains difficult, so a well-understood and carefully analyzed model helps in studying the characteristics of a memristor. In this paper, we analyze a possible mechanism for the switching behavior of a memristor with a Pt/TiO2/Pt structure, and explain the changes of the electronic barrier at the Pt/TiO2 interface. Then, a quantitative analysis of each parameter in the exponential model of the memristor is conducted based on the calculation results. The analysis results are validated by simulation results. The efforts made in this paper will provide researchers with theoretical guidance on choosing appropriate values for (α, β, χ, γ) in this exponential model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61374150 and 61374171), the State Key Program of the National Natural Science Foundation of China (Grant No. 61134012), the National Basic Research Program of China (Grant No. 2011CB710606), and the Fundamental Research Funds for the Central Universities, China (Grant No. 2013TS126).
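
    For context, a widely cited exponential memristor i-v law uses the same four-parameter set: i = w^n·β·sinh(αv) for the ON-state term plus χ[exp(γv) - 1] for the OFF-state term. The sketch below simulates that general form under a sinusoidal drive; the state equation and all numbers are illustrative assumptions and may differ from the paper's exact model.

        # Sketch of an exponential memristor model (assumed form, not the paper's).
        import numpy as np

        alpha, beta, chi, gamma = 2.0, 1e-4, 1e-6, 4.0
        n, a = 3, 0.2                     # state exponent and drift rate (assumed)

        dt = 1e-3
        t = np.arange(0.0, 2.0, dt)
        v = np.sin(2 * np.pi * t)         # 1 Hz sinusoidal drive
        w, i = 0.1, np.empty_like(t)      # normalized state w in (0, 1)

        for k, vk in enumerate(v):
            i[k] = w**n * beta * np.sinh(alpha * vk) + chi * (np.exp(gamma * vk) - 1)
            w += a * vk * w * (1 - w) * dt   # logistic window keeps w bounded

        # Plotting i against v shows the pinched hysteresis loop characteristic
        # of memristive devices (i = 0 whenever v = 0).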

  5. A quantitative model for using acridine orange as a transmembrane pH gradient probe.

    PubMed

    Clerc, S; Barenholz, Y

    1998-05-15

    Monitoring the acidification of the internal space of membrane vesicles by proton pumps can be achieved easily with optical probes. Transmembrane pH gradients cause a blue-shift in the absorbance spectrum and quenching of the fluorescence of the cationic dye acridine orange. It has been postulated that these changes are caused by accumulation and aggregation of the dye inside the vesicles. We tested this hypothesis using liposomes with transmembrane concentration gradients of ammonium sulfate as a model system. The fluorescence intensity of acridine orange solutions incubated with liposomes was affected by the magnitude of the gradient, the volume trapped by the vesicles, and temperature. These experimental data were compared to a theoretical model describing the accumulation of acridine orange monomers in the vesicles according to the inside-to-outside ratio of proton concentrations, and the intravesicular formation of sandwich-like piles of acridine orange cations. This theoretical model quantitatively predicted the relationship between transmembrane pH gradients and the spectral changes of acridine orange. Therefore, adequate characterization of dye aggregation in the lumen of biological vesicles provides the theoretical basis for using acridine orange as an optical probe to quantify transmembrane pH gradients. PMID:9606150
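
    The accumulation step of such a model reduces to a simple relation: for a monoprotic weak base like acridine orange, the equilibrium inside-to-outside dye ratio tracks the proton ratio, roughly 10^(pH_out - pH_in). The snippet below computes just that ratio; the aggregation (quenching) step is not modeled here.

        # Equilibrium accumulation of a weak-base dye across a pH gradient.
        def accumulation_ratio(ph_out: float, ph_in: float) -> float:
            """Predicted inside-to-outside dye ratio for a transmembrane pH gradient."""
            return 10.0 ** (ph_out - ph_in)

        print(accumulation_ratio(7.4, 5.4))   # a 2-unit gradient -> ~100-fold accumulation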

  6. Joint prediction of multiple quantitative traits using a Bayesian multivariate antedependence model

    PubMed Central

    Jiang, J; Zhang, Q; Ma, L; Li, J; Wang, Z; Liu, J-F

    2015-01-01

    Predicting organismal phenotypes from genotype data is important for preventive and personalized medicine as well as plant and animal breeding. Although genome-wide association studies (GWAS) for complex traits have discovered a large number of trait- and disease-associated variants, phenotype prediction based on associated variants usually has low accuracy, even for a high-heritability trait, because these variants typically account for only a limited fraction of the total genetic variance. In comparison with GWAS, whole-genome prediction (WGP) methods can increase prediction accuracy by making use of a huge number of variants simultaneously. Among the various statistical methods for WGP, the multiple-trait model and the antedependence model show their respective advantages. To take advantage of both strategies within a unified framework, we proposed a novel multivariate antedependence-based method for joint prediction of multiple quantitative traits using a Bayesian algorithm, modeling a linear relationship of the effect vector between each pair of adjacent markers. Through both simulation and real-data analyses, our studies demonstrated that the proposed antedependence-based multiple-trait WGP method is more accurate and robust than corresponding traditional counterparts (Bayes A and multi-trait Bayes A) under various scenarios. Our method can readily be extended to deal with missing phenotypes and resequencing data with rare variants, offering a feasible way to jointly predict phenotypes for multiple complex traits in human genetic epidemiology as well as plant and livestock breeding. PMID:25873147

  7. Using Modified Contour Deformable Model to Quantitatively Estimate Ultrasound Parameters for Osteoporosis Assessment

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Fu; Du, Yi-Chun; Tsai, Yi-Ting; Chen, Tainsong

    Osteoporosis is a systemic skeletal disease characterized by low bone mass and micro-architectural deterioration of bone tissue, leading to bone fragility. Finding an effective method for prevention and early diagnosis of the disease is very important. Several parameters, including broadband ultrasound attenuation (BUA), speed of sound (SOS), and stiffness index (STI), have been used to measure the characteristics of bone tissues. In this paper, we propose a method, the modified contour deformable model (MCDM), based on the active contour model (ACM) and the active shape model (ASM), for automatically detecting the calcaneus contour from quantitative ultrasound (QUS) parametric images. The results show that the difference between the contour detected by the MCDM and the true boundary for the phantom is less than one pixel. By comparing the phantom ROIs, a significant relationship was found between the contour mean and bone mineral density (BMD), with R=0.99. The influence of selecting different ROI diameters (12, 14, 16 and 18 mm) and different region-selecting methods, including the fixed region (ROI(fix)), the automatic circular region (ROI(cir)) and the calcaneal contour region (ROI(anat)), was evaluated on human subjects. Measurements with large ROI diameters, especially using the fixed region, result in high position errors (10-45%). The precision errors of the measured ultrasonic parameters for ROI(anat) are smaller than those for ROI(fix) and ROI(cir). In conclusion, ROI(anat) provides more accurate measurement of ultrasonic parameters for the evaluation of osteoporosis and is useful for clinical application.

  8. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    NASA Astrophysics Data System (ADS)

    Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars

    2013-08-01

    Many statements have claimed that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent power system operation will be changed by various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlation between DER deployment and distribution grid operation. Furthermore, to validate the framework, the authors describe reference models of different categories of DERs with their unique characteristics, comprising distributed generation, active demand and electric vehicles. Subsequently, a quantitative analysis was made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations were performed in two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment brings possibilities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  9. A quantitative structure-activity relationship model for radical scavenging activity of flavonoids.

    PubMed

    Om, A; Kim, J H

    2008-03-01

    A quantitative structure-activity relationship (QSAR) study has been carried out for a training set of 29 flavonoids to correlate and predict the 1,1-diphenyl-2-picrylhydrazyl radical scavenging activity (RSA) values obtained from published data. A genetic algorithm and multiple linear regression were employed to select the descriptors and to generate the best prediction model relating the structural features to the RSA values, using (1) three-dimensional (3D) Dragon (TALETE srl, Milan, Italy) descriptors and (2) semi-empirical descriptor calculations. The predictivity of the models was estimated by cross-validation with the leave-one-out method. The results showed that a significant improvement of the statistical indices was obtained by deleting outliers. Based on the data for the compounds used in this study, our results suggest a QSAR model of RSA based on the following descriptors: 3D-MoRSE, WHIM, and GETAWAY. Satisfactory relationships between RSA and the semi-empirical descriptors were also found, demonstrating that the energy of the highest occupied molecular orbital, the total energy, and the heat of formation contributed more significantly than all other descriptors. PMID:18361735
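
    The leave-one-out validation mentioned above is easy to reproduce in outline. The sketch below runs a plain multiple linear regression over a descriptor matrix with leave-one-out cross-validation and computes the predictive q-squared; the data are random placeholders, and the genetic-algorithm descriptor selection is outside the snippet.

        # Leave-one-out cross-validation of an MLR QSAR model (placeholder data).
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(29, 5))               # 29 compounds, 5 selected descriptors
        y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=29)   # synthetic RSA values

        y_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
        q2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)
        print(f"LOO q2 = {q2:.3f}")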

  10. Quantitative structure-activity relationship models of clinical pharmacokinetics: clearance and volume of distribution.

    PubMed

    Gombar, Vijay K; Hall, Stephen D

    2013-04-22

    Two fundamental human pharmacokinetic (PK) parameters, systemic clearance (CL) and apparent volume of distribution (Vd), determine the size and frequency of drug dosing, and their reliable prediction is at the heart of drug discovery and development. Traditionally, estimates of CL and Vd are derived from preclinical in vitro and in vivo absorption, distribution, metabolism, and excretion (ADME) measurements. In this paper, we report quantitative structure-activity relationship (QSAR) models for prediction of systemic CL and steady-state Vd (Vdss) from intravenous (iv) dosing in humans. These QSAR models avoid the uncertainty associated with preclinical-to-clinical extrapolation and require a two-dimensional structure drawing as the sole input. The clean, uniform training sets for these models were derived from the compilation published by Obach et al. (Drug Metab. Disp. 2008, 36, 1385-1405). Models for CL and Vdss were developed using both a support vector regression (SVR) method and a multiple linear regression (MLR) method. The SVR models employ 2048-bit fingerprints developed in-house as structure quantifiers. The MLR models, on the other hand, are based on information-rich electro-topological states of two-atom fragments as descriptors and afford reverse QSAR (RQSAR) analysis to support model-guided, in silico modulation of structures for desired CL and Vdss. The capability of the models to predict iv CL and Vdss with acceptable accuracy was established by randomly splitting the data into training and test sets. On average, for both CL and Vdss, 75% of test compounds were predicted within 2.5-fold of the observed value and 90% of test compounds within 5.0-fold. The performance of the final models, developed from 525 compounds for CL and 569 compounds for Vdss, was evaluated on an external set of 56 compounds. The predictions were either better than or comparable to those of other in silico models reported in the literature. To
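
    As a hedged illustration of the SVR arm and the fold-error metric quoted above, the sketch below trains a support vector regression on binary fingerprint bits against log-transformed clearance and reports the fraction of test compounds within 2.5- and 5.0-fold. The fingerprints and CL values are randomly generated stand-ins, not the authors' in-house descriptors.

        # SVR on fingerprint bits with a "within n-fold" accuracy metric.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.integers(0, 2, size=(525, 2048)).astype(float)   # 2048-bit fingerprints
        y = rng.normal(loc=0.5, scale=0.4, size=525)             # log10 CL (placeholder)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
        model = SVR(kernel="rbf", C=1.0).fit(X_tr, y_tr)

        fold_error = 10 ** np.abs(model.predict(X_te) - y_te)    # ratio in linear space
        for n in (2.5, 5.0):
            print(f"within {n}-fold: {np.mean(fold_error <= n):.0%}")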

  11. A Humanized Clinically Calibrated Quantitative Systems Pharmacology Model for Hypokinetic Motor Symptoms in Parkinson's Disease.

    PubMed

    Roberts, Patrick; Spiros, Athan; Geerts, Hugo

    2016-01-01

    The current treatment of Parkinson's disease with dopamine-centric approaches such as L-DOPA and dopamine agonists, although very successful, is in need of alternative strategies, both for disease modification and for symptom management. Various non-dopaminergic treatment approaches have not resulted in a clear clinical benefit, despite showing clear effects in preclinical animal models. In addition, polypharmacy is common, sometimes leading to unintended effects on non-motor cognitive and psychiatric symptoms. To explore novel targets for symptomatic treatment and possible synergistic pharmacodynamic effects between different drugs, we developed a computer-based Quantitative Systems Pharmacology (QSP) platform of the closed cortico-striatal-thalamic-cortical basal ganglia loop of the dorsal motor circuit. This mechanism-based simulation platform is based on the known neuro-anatomy and neurophysiology of the basal ganglia and explicitly incorporates domain expertise in a formalized way. The calculated beta/gamma power ratio of the local field potential in the subthalamic nucleus correlates well (R(2) = 0.71) with clinically observed extra-pyramidal symptoms triggered by antipsychotics during schizophrenia treatment (43 drug-dose combinations). When incorporating Parkinsonian (PD) pathology and reported compensatory changes, the computer model suggests a major increase in the beta/gamma ratio (corresponding to bradykinesia and rigidity) from a dopamine depletion of 70% onward. The correlation between the outcome of the QSP model and the reported changes in UPDRS III Motor Part for 22 placebo-normalized drug-dose combinations is R(2) = 0.84. The model also correctly recapitulates the lack of clinical benefit for perampanel, MK-0567 and flupirtine, and offers a hypothesis for the translational disconnect. Finally, using human PET imaging studies with placebo response, the computer model predicts well the placebo response for chronic treatment, but not for acute

  12. Quantitative modeling of bioconcentration factors of carbonyl herbicides using multivariate image analysis.

    PubMed

    Freitas, Mirlaine R; Barigye, Stephen J; Daré, Joyce K; Freitas, Matheus P

    2016-06-01

    The bioconcentration factor (BCF) is an important parameter used to estimate the propensity of chemicals to accumulate in aquatic organisms from the ambient environment. While simple regressions for estimating the BCF of chemical compounds from water solubility or the n-octanol/water partition coefficient have been proposed in the literature, these models do not always yield good correlations, and more descriptive variables are required for better modeling of BCF data for a given series of organic pollutants, such as some herbicides. Thus, the logBCF values for a set of carbonyl herbicides comprising amide, urea, carbamate and thiocarbamate groups were quantitatively modeled using multivariate image analysis (MIA) descriptors, derived from colored image representations of chemical structures. The logBCF model was calibrated and rigorously validated (r(2) = 0.79, q(2) = 0.70 and rtest(2) = 0.81), providing a comprehensive three-parameter linear equation after variable selection (logBCF = 5.682 - 0.00233 × X9774 - 0.00070 × X813 - 0.00273 × X5144); the variables represent pixel coordinates in the multivariate image. Finally, chemical interpretation of the obtained models in terms of the structural characteristics responsible for enhanced or reduced logBCF values was performed, providing key leads for the prospective development of more eco-friendly synthetic herbicides. PMID:26971171
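
    For convenience, the published three-parameter equation can be wrapped directly as a function. The pixel intensities X9774, X813 and X5144 must come from the MIA image-generation step, which is not reproduced here.

        # The abstract's MIA-QSAR linear model as a function.
        def log_bcf(x9774: float, x813: float, x5144: float) -> float:
            """logBCF from the published three-parameter equation."""
            return 5.682 - 0.00233 * x9774 - 0.00070 * x813 - 0.00273 * x5144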

  13. Mixed linear model approach for mapping quantitative trait loci underlying crop seed traits

    PubMed Central

    Qi, T; Jiang, B; Zhu, Z; Wei, C; Gao, Y; Zhu, S; Xu, H; Lou, X

    2014-01-01

    The crop seed is a complex organ that may be composed of the diploid embryo, the triploid endosperm and the diploid maternal tissues. According to the genetic features of seed characters, two genetic models for mapping quantitative trait loci (QTLs) of crop seed traits are proposed, with inclusion of maternal effects, embryo or endosperm effects of QTLs, environmental effects and QTL-by-environment (QE) interactions. The mapping population can be generated either from double back-cross of immortalized F2 (IF2) to the two parents, from random-cross of the IF2, or from selfing of the IF2 population. Candidate marker intervals potentially harboring QTLs are first selected through one-dimensional scanning across the whole genome. The selected candidate marker intervals are then included in the model as cofactors to control background genetic effects on the putative QTL(s). Finally, a QTL full model is constructed and model selection is conducted to eliminate false-positive QTLs. The genetic main effects of QTLs, QE interaction effects and the corresponding P-values are computed by a Markov chain Monte Carlo algorithm for the Gaussian mixed linear model via Gibbs sampling. Monte Carlo simulations were performed to investigate the reliability and efficiency of the proposed method. The simulation results showed that the proposed method had higher power to accurately detect simulated QTLs and properly estimated the effects of these QTLs. To demonstrate its usefulness, the proposed method was used to identify the QTLs underlying fiber percentage in an upland cotton IF2 population. A software package, QTLNetwork-Seed, was developed for QTL analysis of seed traits. PMID:24619175

  14. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    …the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach to integrate quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management, by combining the NSSs framework and an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over-abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  15. Parametric modeling for quantitative analysis of pulmonary structure to function relationships

    NASA Astrophysics Data System (ADS)

    Haider, Clifton R.; Bartholmai, Brian J.; Holmes, David R., III; Camp, Jon J.; Robb, Richard A.

    2005-04-01

    While lung anatomy is well understood, pulmonary structure-to-function relationships, such as the complex elastic deformation of the lung during respiration, are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability over the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computing dynamic anatomic parametric models of the lung during respiration, correlating high-resolution anatomy with underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the lung surface. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifested by alterations in the normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial data, quantitatively illustrating detailed anatomic structure-to-pulmonary-function relationships that the individual methods above cannot provide.

  16. Physiologically Based Pharmacokinetic Modeling Framework for Quantitative Prediction of an Herb–Drug Interaction

    PubMed Central

    Brantley, S J; Gufford, B T; Dua, R; Fediuk, D J; Graf, T N; Scarlett, Y V; Frederick, K S; Fisher, M B; Oberlies, N H; Paine, M F

    2014-01-01

    Herb–drug interaction predictions remain challenging. Physiologically based pharmacokinetic (PBPK) modeling was used to improve prediction accuracy of potential herb–drug interactions using the semipurified milk thistle preparation, silibinin, as an exemplar herbal product. Interactions between silibinin constituents and the probe substrates warfarin (CYP2C9) and midazolam (CYP3A) were simulated. A low silibinin dose (160 mg/day × 14 days) was predicted to increase midazolam area under the curve (AUC) by 1%, which was corroborated with external data; a higher dose (1,650 mg/day × 7 days) was predicted to increase midazolam and (S)-warfarin AUC by 5% and 4%, respectively. A proof-of-concept clinical study confirmed minimal interaction between high-dose silibinin and both midazolam and (S)-warfarin (9 and 13% increase in AUC, respectively). Unexpectedly, (R)-warfarin AUC decreased (by 15%), but this is unlikely to be clinically important. Application of this PBPK modeling framework to other herb–drug interactions could facilitate development of guidelines for quantitative prediction of clinically relevant interactions. PMID:24670388

  17. Establishment of Quantitative Severity Evaluation Model for Spinal Cord Injury by Metabolomic Fingerprinting

    PubMed Central

    Yang, Hao; Cohen, Mitchell Jay; Chen, Wei; Sun, Ming-Wei; Lu, Charles Damien

    2014-01-01

    Spinal cord injury (SCI) is a devastating event with limited hope for recovery and represents an enormous public health issue. It is crucial to understand the disturbances in the metabolic network after SCI to identify injury mechanisms and opportunities for treatment intervention. Through plasma 1H-nuclear magnetic resonance (NMR) screening, we identified 15 metabolites that made up an “Eigen-metabolome” capable of distinguishing rats with severe SCI from healthy control rats. Forty enzymes regulated these 15 metabolites in the metabolic network. We also found that 16 metabolites regulated by 130 enzymes in the metabolic network impacted neurobehavioral recovery. Using the Eigen-metabolome, we established a linear discrimination model to cluster rats with severe and mild SCI and control rats into separate groups and to identify the interactive relationships between metabolic biomarkers in the global metabolic network. We identified 10 clusters in the global metabolic network and defined them as distinct metabolic disturbance domains of SCI. Metabolic paths included retinal, glycerophospholipid, and arachidonic acid metabolism; the NAD–NADPH conversion process; tyrosine metabolism; and cadaverine and putrescine metabolism. In summary, we presented a novel interdisciplinary method that integrates metabolomics and global metabolic network analysis to visualize metabolic network disturbances after SCI. Our study demonstrates a systems-biology paradigm in which the integration of 1H-NMR metabolomics and global metabolic network analysis is useful for visualizing complex metabolic disturbances after severe SCI. Furthermore, our findings may provide a new quantitative injury severity evaluation model for clinical use. PMID:24727691

  18. Towards quantitative modelling of surface deformation of polymer micro-structures under tactile scanning measurement

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brand, Uwe; Ahbe, Thomas

    2014-04-01

    Contact stylus-based surface profilometry is capable of topography measurement whilst being independent of the physical, electrical and optical properties of the materials under test, and has therefore become an indispensable tool for dimensional measurement of transparent specimens. However, large measurement deviations appear when soft specimens, especially specimens made of polymers, are measured by contact stylus profilometry. In this paper the surface deformation behaviour of two molding polymers and one photoresist (Ormocomp, Ormoclad and SU-8, respectively) under different tactile measurement conditions has been experimentally investigated. An empirical analytical model is proposed for predicting the surface deformation of soft specimens under tactile (sliding) contact. Preliminary experimental results demonstrate that the proposed five-parameter model is applicable for describing the deformation behaviour of these thermoplastic materials at scanning speeds ranging from 2 to 200 μm s⁻¹ and probing forces varying from 5 to 500 μN. In addition, through quantitative topographical measurements of the layer thickness of the aforementioned photoresists, the scratch behaviour and the time-dependent mechanical properties of these materials have also been experimentally determined.

  19. Probabilistic Quantitative Precipitation Forecasting over East China using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Yang, Ai; Yuan, Huiling

    2014-05-01

    Bayesian model averaging (BMA) is a post-processing method that weights the predictive probability density functions (PDFs) of individual ensemble members. This study investigates the BMA method for calibrating quantitative precipitation forecasts (QPFs) from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database. The QPFs over East Asia during summer (June-August) 2008-2011 are generated from six operational ensemble prediction systems (EPSs) (ECMWF, UKMO, NCEP, CMC, JMA and CMA) and from multi-center ensembles of their combinations. The satellite-based precipitation estimate product TRMM 3B42 V7 is used as the verification dataset. In the BMA post-processing for precipitation forecasts, the PDF matching method is first applied to bias-correct systematic errors in each forecast member, by adjusting the PDFs of the forecasts to match the PDFs of the observations. Next, a logistic regression and a two-parameter gamma distribution are used to fit the probability of rainfall occurrence and the precipitation distribution. Through these two steps, the BMA post-processing systematically bias-corrects the ensemble forecasts. The 60-70% cumulative distribution function (CDF) predictions estimate moderate precipitation better than the raw ensemble mean, while the 90% upper boundary of the BMA CDF predictions can be set as an alarm threshold for extreme precipitation. In general, the BMA method is well suited to multi-center ensemble post-processing, improving probabilistic QPFs (PQPFs) with better ensemble spread and reliability. KEYWORDS: Bayesian model averaging (BMA); post-processing; ensemble forecast; TIGGE
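
    The BMA predictive distribution described above is, per ensemble member, a weighted mixture of a point mass at zero rain (from the logistic regression) and a gamma distribution for positive amounts. The sketch below evaluates such a mixture CDF and reads off the 90% quantile used as an extreme-rain threshold; the weights and regression coefficients are invented for illustration and would in practice be trained on past forecast-observation pairs.

        # Sketch of a BMA mixture CDF for precipitation (assumed parameters).
        import numpy as np
        from scipy.stats import gamma

        w = np.array([0.3, 0.25, 0.2, 0.15, 0.1])     # BMA weights over 5 members
        fcst = np.array([2.0, 5.0, 0.0, 3.5, 8.0])    # member QPFs (mm)

        def p_zero(f, a=1.2, b=0.8):
            """Probability of no rain from a logistic regression on the forecast."""
            return 1.0 / (1.0 + np.exp(-(a - b * f)))

        def bma_cdf(y):
            """Mixture CDF: point mass at zero plus a gamma for positive amounts."""
            shape, scale = 0.7, 1.0 + 0.5 * fcst      # member-dependent gamma scale
            comp = p_zero(fcst) + (1 - p_zero(fcst)) * gamma.cdf(y, shape, scale=scale)
            return float(np.sum(w * comp))

        ys = np.linspace(0.0, 50.0, 2001)
        q90 = ys[np.searchsorted([bma_cdf(y) for y in ys], 0.9)]
        print(f"90% quantile (extreme-rain alarm threshold): {q90:.1f} mm")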

  1. How plants manage food reserves at night: quantitative models and open questions

    PubMed Central

    Scialdone, Antonio; Howard, Martin

    2015-01-01

    In order to cope with night-time darkness, plants during the day allocate part of their photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night and is set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust, with the degradation rate being adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants can compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models, and describe aspects that require further investigation. Overall, the process of night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources. PMID:25873925
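
    The division computation at the heart of these models is easy to state: set the degradation rate to the starch content divided by the time remaining until dawn. The toy simulation below shows that this rule exhausts reserves at dawn regardless of how much starch was stored or when darkness began; it is a cartoon of the models' core idea, not any specific published mechanism.

        # Constant-rate starch degradation set by an S/T division at dusk.
        def starch_trajectory(s0: float, t_dark: float, t_dawn: float, dt: float = 0.1):
            """Simulate degradation at rate S(dusk) / (T_dawn - T_dusk)."""
            rate = s0 / (t_dawn - t_dark)     # the "arithmetic division"
            t, s, out = t_dark, s0, []
            while t < t_dawn:
                out.append((t, s))
                s = max(s - rate * dt, 0.0)
                t += dt
            return out

        # An unexpectedly early dusk (10 h) still exhausts starch right at dawn (24 h):
        print(starch_trajectory(100.0, 10.0, 24.0)[-1])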

  2. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting cell behavior and tissue material properties. So far the assessment of collagen gels has relied on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range from circular references (e.g., the culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and the background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and a circular shape constraint to locate the gel boundary. An adaptive weighting scheme is employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearance at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained from the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation, with an average Dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in the gel contours obtained by the proposed method over two popular, generic segmentation methods.
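
    For reference, the agreement metric quoted above is straightforward to compute. The helper below evaluates the Dice similarity coefficient between a binary segmentation mask and a manual ground-truth mask.

        # Dice similarity coefficient between two boolean masks.
        import numpy as np

        def dice(seg: np.ndarray, truth: np.ndarray) -> float:
            """2|A n B| / (|A| + |B|) for binary masks."""
            seg, truth = seg.astype(bool), truth.astype(bool)
            inter = np.logical_and(seg, truth).sum()
            return 2.0 * inter / (seg.sum() + truth.sum())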

  4. Quantitative modelling of amyloidogenic processing and its influence by SORLA in Alzheimer's disease.

    PubMed

    Schmidt, Vanessa; Baum, Katharina; Lao, Angelyn; Rateitschak, Katja; Schmitz, Yvonne; Teichmann, Anke; Wiesner, Burkhard; Petersen, Claus Munck; Nykjaer, Anders; Wolf, Jana; Wolkenhauer, Olaf; Willnow, Thomas E

    2012-01-01

    The extent of proteolytic processing of the amyloid precursor protein (APP) into neurotoxic amyloid-β (Aβ) peptides is central to the pathology of Alzheimer's disease (AD). Accordingly, modifiers that increase Aβ production rates are risk factors in the sporadic form of AD. In a novel systems biology approach, we combined quantitative biochemical studies with mathematical modelling to establish a kinetic model of amyloidogenic processing, and to evaluate the influence of SORLA/SORL1, an inhibitor of APP processing and an important genetic risk factor. Contrary to previous hypotheses, our studies demonstrate that secretases are allosteric enzymes that require cooperativity by APP oligomerization for efficient processing. Cooperativity enables swift adaptive changes in secretase activity with even small alterations in APP concentration. We also show that SORLA prevents APP oligomerization both in cultured cells and in the brain in vivo, eliminating the preferred form of the substrate and causing secretases to switch to a less efficient non-allosteric mode of action. These data represent the first mathematical description of the contribution of genetic risk factors to AD, substantiating the relevance of subtle changes in SORLA levels for amyloidogenic processing, as proposed for patients carrying SORL1 risk alleles. PMID:21989385
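
    The functional consequence of the cooperativity described above can be illustrated with generic rate laws: a Hill-type law responds far more steeply to small changes in substrate than a non-cooperative Michaelis-Menten law. The comparison below uses assumed values for Vmax, K and the Hill coefficient, and is not the authors' fitted model.

        # Cooperative (Hill) vs non-cooperative (Michaelis-Menten) processing rates.
        def hill(s, vmax=1.0, k=1.0, n=4):
            return vmax * s**n / (k**n + s**n)

        def michaelis_menten(s, vmax=1.0, k=1.0):
            return vmax * s / (k + s)

        for s in (0.8, 1.0, 1.2):   # +/-20% around the half-saturation point
            print(f"[APP]={s:.1f}: cooperative v={hill(s):.3f}, "
                  f"non-cooperative v={michaelis_menten(s):.3f}")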

  5. A Quantitative System Pharmacology Computer Model for Cognitive Deficits in Schizophrenia

    PubMed Central

    Geerts, H; Roberts, P; Spiros, A

    2013-01-01

    Although the positive symptoms of schizophrenia are reasonably well controlled by current antipsychotics, cognitive impairment remains largely unaddressed. The MATRICS initiative lays out a regulatory path forward, and a number of targets have been tested in the clinic, so far without much success. To address this translational disconnect, we have developed a mechanism-based humanized computer model of a relevant key cortical brain network with schizophrenia pathology involved in the maintenance aspect of working memory (WM). The model is calibrated using published clinical experiments on N-back WM tests. We further simulate the opposite effects of the γ-aminobutyric acid (GABA) modulators lorazepam and flumazenil, and a published augmentation trial of clozapine with risperidone, illustrating the introduction of new targets and the capacity to predict the effects of polypharmacy. This humanized approach allows early prospective and quantitative assessment of cognitive outcome in a central nervous system (CNS) research and development project, thereby hopefully increasing the success rate of clinical trials. PMID:23887686

  6. A quantitative analysis of 3-D coronary modeling from two or more projection images.

    PubMed

    Movassaghi, B; Rasche, V; Grass, M; Viergever, M A; Niessen, W J

    2004-12-01

    A method is introduced to examine the geometrical accuracy of the three-dimensional (3-D) representation of coronary arteries from multiple (two or more) calibrated two-dimensional (2-D) angiographic projections. When more than two projections are involved (multiprojection modeling), a novel procedure is presented that consists of fully automated centerline and width determination in all available projections, based on the information provided by the semi-automated centerline detection in two initial calibrated projections. The accuracy of the 3-D coronary modeling approach is determined by a quantitative examination of the 3-D centerline point position and the 3-D cross-sectional area of the reconstructed objects. The measurements are based on the analysis of calibrated phantom and calibrated coronary 2-D projection data. From this analysis, a confidence region (α ≈ 35°-145°) for the angular distance between the two initial projection images is determined for which the modeling procedure is sufficiently accurate for the applied system. Within this angular range the centerline position error is less than 0.8 mm, in terms of the Euclidean distance to a predefined ground truth. When involving more projections using our new procedure, experiments show that when the initial pair of projection images has an angular distance in the range α ≈ 35°-145°, the centerlines in all other projections (γ = 0°-180°) were indicated very precisely without any additional centering procedure. Involving additional projection images in the modeling procedure can provide a more realistic shape of the structure. In the case of the concave segment, however, the involvement of multiple projections does not necessarily provide a more realistic shape of the reconstructed structure. PMID:15575409

  7. Quantitative Phosphoproteomics Reveals Wee1 Kinase as a Therapeutic Target in a Model of Proneural Glioblastoma.

    PubMed

    Lescarbeau, Rebecca S; Lei, Liang; Bakken, Katrina K; Sims, Peter A; Sarkaria, Jann N; Canoll, Peter; White, Forest M

    2016-06-01

    Glioblastoma (GBM) is the most common malignant primary brain cancer. Median survival is about a year, so new approaches to treating this disease are necessary. To identify signaling molecules regulating GBM progression in a genetically engineered murine model of proneural GBM, we quantified phosphotyrosine-mediated signaling using mass spectrometry. Oncogenic signals, including phosphorylated ERK MAPK, PI3K, and PDGFR, were found to be increased in the murine tumors relative to brain. Phosphorylation of CDK1 pY15, associated with the G2 arrest checkpoint, was identified as the most differentially phosphorylated site, with a 14-fold increase in phosphorylation in the tumors. To assess the role of this checkpoint as a potential therapeutic target, syngeneic primary cell lines derived from these tumors were treated with MK-1775, an inhibitor of Wee1, the kinase responsible for CDK1 Y15 phosphorylation. MK-1775 treatment led to mitotic catastrophe, as defined by increased DNA damage and cell death by apoptosis. To assess the generality of targeting Wee1/CDK1 in GBM, patient-derived xenograft (PDX) cell lines were also treated with MK-1775. Although the response was more heterogeneous, on-target Wee1 inhibition led to decreased CDK1 Y15 phosphorylation and increased DNA damage and apoptosis in each line. These results were also validated in vivo, where single-agent MK-1775 demonstrated an antitumor effect on a flank PDX tumor model, increasing mouse survival by 1.74-fold. This study highlights the ability of unbiased quantitative phosphoproteomics to reveal therapeutic targets in tumor models, and the potential of Wee1 inhibition as a treatment approach in preclinical models of GBM. Mol Cancer Ther; 15(6); 1332-43. ©2016 AACR. PMID:27196784

  8. Rock physics models for constraining quantitative interpretation of ultrasonic data for biofilm growth and development

    NASA Astrophysics Data System (ADS)

    Alhadhrami, Fathiya Mohammed

    This study examines the use of rock physics modeling for quantitative interpretation of seismic data in the context of microbial growth and biofilm formation in unconsolidated sediment. The impetus for this research comes from geophysical experiments by Davis et al. (2010) and Kwon and Ajo-Franklin (2012). These studies observed that microbial growth has a small effect on P-wave velocities (VP) but a large effect on seismic amplitudes. Davis et al. (2010) and Kwon and Ajo-Franklin (2012) speculated that the amplitude variations were due to a combination of rock mechanical changes from the accumulation of microbial growth related features such as biofilms. A more definite conclusion can be drawn by developing rock physics models that connect rock properties to seismic amplitudes. The primary objective of this work is to provide an explanation for the high amplitude attenuation due to biofilm growth. The results suggest that biofilm formation in the Davis et al. (2010) experiment exhibits two growth styles: a load-bearing style, in which biofilm behaves like an additional mineral grain, and a non-load-bearing mode, in which the biofilm grows into the pore spaces. In the load-bearing mode, the biofilms contribute to the stiffness of the sediments; we refer to this style as "filler." In the non-load-bearing mode, the biofilms contribute only to a change in the density of the sediments without affecting their strength; we refer to this style of microbial growth as "mushroom." Both growth styles appear to change permeability more than the moduli or the density. As a result, while the VP velocity remains relatively unchanged, the amplitudes can change significantly depending on biofilm saturation. Interpreting seismic data from biofilm growth in terms of rock physics models provides greater insight into sediment-fluid interaction. The models in turn can be used to understand microbially enhanced oil recovery and to assist in solving environmental issues such as creating bio

  9. Analytical model for quantitative prediction of material contrasts in scattering-type near-field optical microscopy.

    PubMed

    Cvitkovic, A; Ocelic, N; Hillenbrand, R

    2007-07-01

    Nanometer-scale mapping of complex optical constants by scattering-type near-field microscopy has suffered from quantitative discrepancies between theory and experiment. To resolve this problem, a novel analytical model is presented here. Comparison with experimental data demonstrates that the model quantitatively reproduces approach curves on a Au surface and yields an unprecedented agreement with amplitude and phase spectra recorded on a phonon-polariton-resonant SiC sample. The simple closed-form solution derived here should enable the determination of the local complex dielectric function of an unknown sample, thereby identifying its nanoscale chemical composition, crystal structure and conductivity. PMID:19547189

  10. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione.

    PubMed

    Si, Hongzong; Wang, Tao; Zhang, Kejun; Duan, Yun-Bo; Yuan, Shuping; Fu, Aiping; Hu, Zhide

    2007-05-22

    A quantitative model was developed by gene expression programming (GEP) to predict the depletion percentage of glutathione (DPG) caused by skin-sensitizing chemical substances. Each compound was represented by several calculated structural descriptors involving constitutional, topological, geometrical, electrostatic and quantum-chemical features. The GEP method produced a nonlinear, five-descriptor quantitative model with a mean error and a correlation coefficient of 10.52 and 0.94 for the training set, and 22.80 and 0.85 for the test set, respectively. The GEP-predicted results are shown to be in good agreement with experimental ones, better than those of the heuristic method. PMID:17481417

  11. Discrete dynamic modeling with asynchronous update, or how to model complex systems in the absence of quantitative information.

    PubMed

    Assmann, Sarah M; Albert, Réka

    2009-01-01

    A major aim of systems biology is the study of the inter-relationships found within and between large biological data sets. Here we describe one systems biology method, in which the tools of network analysis and discrete dynamic (Boolean) modeling are used to develop predictive models of cellular signaling in cases where detailed temporal and kinetic information regarding the propagation of the signal through the system is lacking. This approach is also applicable to data sets derived from some other types of biological systems, such as transcription factor-mediated regulation of gene expression during the control of developmental fate, or host defense responses following pathogen attack, and is equally applicable to plant and non-plant systems. The method also allows prediction of how elimination of one or more individual signaling components will affect the ultimate outcome, thus allowing the researcher to model the effects of genetic knockout or pharmacological block. The method also serves as a starting point from which more quantitative models can be developed as additional information becomes available. PMID:19588107
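
    As a concrete illustration of the asynchronous Boolean updating described above, the sketch below updates one randomly chosen node per step by its logical rule. The three-node network (an input S activating A, A activating B, B inhibiting A) is a made-up example, not a model from this chapter, and a knockout amounts to clamping a node's rule to False.

        # Asynchronous (random-order) update of a toy Boolean network.
        import random

        rules = {
            "S": lambda s: s["S"],                  # input node, held fixed
            "A": lambda s: s["S"] and not s["B"],   # S activates A, B inhibits A
            "B": lambda s: s["A"],                  # A activates B
        }

        random.seed(0)
        state = {"S": True, "A": False, "B": False}
        for _ in range(20):
            node = random.choice(list(rules))       # asynchronous: one node per step
            state[node] = rules[node](state)
        print(state)

        # Modeling a genetic knockout: rules["A"] = lambda s: False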

  12. Estimation of glacial outburst floods in Himalayan watersheds by means of quantitative modelling

    NASA Astrophysics Data System (ADS)

    Brauner, M.; Agner, P.; Vogl, A.; Leber, D.; Haeusler, H.; Wangda, D.

    2003-04-01

    In the Himalayas, intense glacier retreat and rapidly developing settlement activity in the downstream valleys result in dramatically increasing glacier lake outburst risk. As settlement activity concentrates on broad and productive valley areas, typically 10 to 70 kilometres downstream of the flood source, hazard awareness and preparedness are limited. Application of quantitative assessment methodology is therefore crucial in order to delineate flood-prone areas and develop hazard preparedness concepts by means of scenario modelling. For dam breach back-calculation the 1D simulation tool BREACH is utilised; the initiation by surge waves and the broad sediment size spectrum of tills are generally difficult to implement, so a tool with a long application history was chosen. The flood propagation is simulated with the 2D hydraulic simulation model FLO2D, which enables routing of the water flood and its sediment load. In three Himalayan watersheds (Pho Chhu valley, Bhutan; Tam Pokhari valley, Nepal) recent glacier lake outbursts (each with a volume of more than 20 million m3) and the consecutive floods are simulated and calibrated by means of multi-temporal morphological information, high-water marks, geomorphological interpretation and eyewitness consultation. These calculations show that for these events the dam breach process was slow (within 0.75 to 3 hours), with low flood hydrographs. The flood propagation was governed by a sequence of low-sloping depositional channel sections and steep channel sections with intense lateral sediment mobilisation and temporary blockage. This resulted in a positive feedback and prolonged the flood. By means of sensitivity analysis, the influence of morphological changes during the events and the importance of the dam breach process to the overall result are estimated. It can be shown that the accuracy of the high-water limit is governed by the following processes: sediment mobilisation, the breaching process, water volume, and morphological changes

  13. Chronic spinal compression model in minipigs: a systematic behavioral, qualitative, and quantitative neuropathological study.

    PubMed

    Navarro, Roman; Juhas, Stefan; Keshavarzi, Sassan; Juhasova, Jana; Motlik, Jan; Johe, Karl; Marsala, Silvia; Scadeng, Miriam; Lazar, Peter; Tomori, Zoltan; Schulteis, Gery; Beattie, Michael; Ciacci, Joseph D; Marsala, Martin

    2012-02-10

    The goal of the present study was to develop a porcine spinal cord injury (SCI) model, and to describe the neurological outcome and characterize the corresponding quantitative and qualitative histological changes at 4-9 months after injury. Adult Gottingen-Minnesota minipigs were anesthetized and placed in a spine immobilization frame. The exposed T12 spinal segment was compressed in a dorso-ventral direction using a 5-mm-diameter circular bar with a progressively increasing peak force (1.5, 2.0, or 2.5 kg) at a velocity of 3 cm/sec. During recovery, motor and sensory function were periodically monitored. At the end of the survival period, the animals were perfusion-fixed and the extent of local SCI was analyzed by (1) post-mortem MRI analysis of the dissected spinal cords, (2) qualitative and quantitative analysis of axonal survival at the epicenter of injury, and (3) defining the presence of local inflammatory changes, astrocytosis, and schwannosis. Following 2.5-kg spinal cord compression, the animals demonstrated a near-complete loss of motor and sensory function with no recovery over the next 4-9 months. Those that underwent spinal cord compression with 2 kg force developed an incomplete injury with progressive partial neurological recovery characterized by a restricted ability to stand and walk. Animals injured with a spinal compression force of 1.5 kg showed near-normal ambulation 10 days after injury. In fully paralyzed animals (2.5 kg), MRI analysis demonstrated a loss of spinal white matter integrity and extensive septal cavitations. A significant correlation between the magnitude of loss of small and medium-sized myelinated axons in the ventral funiculus and neurological deficits was identified. These data, demonstrating stable neurological deficits in severely injured animals, similarities of spinal pathology to humans, and relatively good post-injury tolerance of this strain of minipigs to spinal trauma, suggest that this model can successfully be used to study

  14. Exploring multiple quantitative trait loci models of hepatic fibrosis in a mouse intercross.

    PubMed

    Hall, Rabea A; Hillebrandt, Sonja; Lammert, Frank

    2016-02-01

    Most common diseases are attributed to multiple genetic variants, and the feasibility of identifying inherited risk factors is often restricted to the identification of alleles with high or intermediate effect sizes. In our previous studies, we identified single loci associated with hepatic fibrosis (Hfib1-Hfib4). Recent advances in analysis tools allowed us to model loci interactions for liver fibrosis. We analysed 322 F2 progeny from an intercross of the fibrosis-susceptible strain BALB/cJ and the resistant strain FVB/NJ. The mice were challenged with carbon tetrachloride (CCl4) for 6 weeks to induce chronic hepatic injury and fibrosis. Fibrosis progression was quantified by determining histological fibrosis stages and hepatic collagen contents. Phenotypic data were correlated with genome-wide markers to identify quantitative trait loci (QTLs). Thirteen susceptibility loci were identified by single and composite interval mapping, and were included in the subsequent multiple QTL model (MQM) testing. The models provided evidence for susceptibility loci with the strongest associations to collagen contents (chromosomes 1, 2, 8 and 13) or fibrosis stages (chromosomes 1, 2, 12 and 14). These loci contained the known fibrosis risk genes Hc, Fasl and Foxa2 and were incorporated into a fibrosis network. Interestingly, the hepatic fibrosis locus on chromosome 1 (Hfib5) connects both phenotype networks, strengthening its role as a potential modifier locus. Adding multiple-QTL mapping to association studies provides valuable information on gene-gene interactions in experimental crosses and human cohorts. This study presents an initial step towards a refined understanding of profibrogenic gene networks. PMID:26547557

  15. Effect of platform, reference material, and quantification model on enumeration of Enterococcus by quantitative PCR methods

    EPA Science Inventory

    Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. QPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testi...

  16. Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS

    PubMed Central

    Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2016-01-01

    We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are caused not by biological differences but by experimental bias. There are different sources of bias, including variabilities during sample collection and sample storage, poor experimental design, and noise. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EICs), with the parameters treated in a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332

  17. A quantitative exposure model simulating human norovirus transmission during preparation of deli sandwiches.

    PubMed

    Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke

    2015-03-01

    Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected and shedding food handlers manipulating foods such as deli sandwiches. The presented study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs, locations within the model allowing the accumulation of HuNoV and the action of intervention measures. The second component covered the contamination sources: (1) the initial HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles present on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV on the food and hand reservoirs. The inclusion of hand and surface disinfection and hand gloving as a single intervention measure was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance of hand washing after a restroom visit did reduce HuNoV presence substantially on all reservoirs.
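
    A minimal Monte Carlo sketch of the kind of stochastic transfer calculation such an exposure model chains together; the hand loads, transfer fractions, compliance rate, and washing efficacy below are assumptions for illustration, not the model's fitted inputs.

        import numpy as np

        rng = np.random.default_rng(2)
        n_sims = 10_000
        hand_load = rng.lognormal(4, 1, n_sims)  # HuNoV on hands after restroom visit
        transfer = rng.beta(2, 50, n_sims)       # fraction transferred per touch
        washed = rng.random(n_sims) < 0.8        # assumed 80% washing compliance
        wash_factor = 1e-2                       # assumed ~2-log removal when washing

        on_sandwich = hand_load * np.where(washed, wash_factor, 1.0) * transfer
        print(f"mean HuNoV per touched sandwich: {on_sandwich.mean():.2f}")
        print(f"95th percentile:                 {np.percentile(on_sandwich, 95):.2f}")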

  18. Simulation of water-use conservation scenarios for the Mississippi Delta using an existing regional groundwater flow model

    USGS Publications Warehouse

    Barlow, Jeannie R.B.; Clark, Brian R.

    2011-01-01

    The Mississippi River alluvial plain in northwestern Mississippi (referred to as the Delta), once a floodplain to the Mississippi River covered with hardwoods and marshland, is now a highly productive agricultural region of large economic importance to Mississippi. Water for irrigation is supplied primarily by the Mississippi River Valley alluvial aquifer, and although the alluvial aquifer has a large reserve, there is evidence that the current rate of water use from the alluvial aquifer is not sustainable. Using an existing regional groundwater flow model, conservation scenarios were developed for the alluvial aquifer underlying the Delta region in northwestern Mississippi to assess where the implementation of water-use conservation efforts would have the greatest effect on future water availability: either uniformly throughout the Delta, or focused on a cone of depression in the alluvial aquifer underlying the central part of the Delta. Five scenarios were simulated with the Mississippi Embayment Regional Aquifer Study groundwater flow model: (1) a base scenario in which water use remained constant at 2007 rates throughout the entire simulation; (2) a 5-percent 'Delta-wide' conservation scenario in which water use across the Delta was decreased by 5 percent; (3) a 5-percent 'cone-equivalent' conservation scenario in which water use within the area of the cone of depression was decreased by 11 percent (a volume equivalent to the 5-percent Delta-wide conservation scenario); (4) a 25-percent Delta-wide conservation scenario in which water use across the Delta was decreased by 25 percent; and (5) a 25-percent cone-equivalent conservation scenario in which water use within the area of the cone of depression was decreased by 55 percent (a volume equivalent to the 25-percent Delta-wide conservation scenario). The Delta-wide scenarios result in greater average water-level improvements (relative to the base scenario) for the entire Delta area than the cone-equivalent scenarios.
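
    The volume equivalence between the scenario pairs can be checked in a few lines; the cone-area share of total pumping is inferred here from the 5%->11% and 25%->55% pairs quoted above, so treat this as a back-of-envelope reading of the abstract rather than a model result.

        # A Delta-wide cut of f_dw is volume-equivalent to a cone-only cut of
        # f_cone = f_dw * (total use / cone use); the pairs in the abstract
        # (5% -> 11%, 25% -> 55%) imply the cone pumps about 5/11 of the total.
        cone_share = 5 / 11
        for f_dw in (0.05, 0.25):
            print(f"{f_dw:.0%} Delta-wide -> {f_dw / cone_share:.0%} within the cone")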

  19. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    PubMed

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical
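
    A small sketch of how a multiple-strain multinomial step can be implemented; the strain prevalences and sea-gene (enterotoxin A) assignments are hypothetical placeholders, not the Lombardy field data.

        import numpy as np

        rng = np.random.default_rng(3)
        strain_p = np.array([0.45, 0.30, 0.15, 0.10])        # hypothetical profile
        sea_positive = np.array([True, False, True, False])  # carries the sea gene

        n_cells = 10_000              # contaminating S. aureus cells in a serving
        counts = rng.multinomial(n_cells, strain_p)
        toxigenic = int(counts[sea_positive].sum())
        print(f"{toxigenic} of {n_cells} cells belong to sea-positive strains")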

  20. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  1. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  2. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  3. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  4. Quantitative analysis of aqueous phase composition of model dentin adhesives experiencing phase separation

    PubMed Central

    Ye, Qiang; Park, Jonggu; Parthasarathy, Ranganathan; Pamatmat, Francis; Misra, Anil; Laurence, Jennifer S.; Marangos, Orestes; Spencer, Paulette

    2013-01-01

    There have been reports of the sensitivity of our current dentin adhesives to excess moisture, for example, water-blisters in adhesives placed on over-wet surfaces, and phase separation with concomitant limited infiltration of the critical dimethacrylate component into the demineralized dentin matrix. To determine quantitatively the hydrophobic/hydrophilic components in the aqueous phase when exposed to over-wet environments, model adhesives were mixed with 16, 33, and 50 wt % water to yield well-separated phases. Based upon high-performance liquid chromatography coupled with photodiode array detection, it was found that the amounts of hydrophobic BisGMA and hydrophobic initiators are less than 0.1 wt % in the aqueous phase. The amount of these compounds decreased with an increase in the initial water content. The major components of the aqueous phase were hydroxyethyl methacrylate (HEMA) and water, and the HEMA content ranged from 18.3 to 14.7 wt %. Different BisGMA homologues and the relative content of these homologues in the aqueous phase have been identified; however, the amount of crosslinkable BisGMA was minimal and, thus, could not help in the formation of a crosslinked polymer network in the aqueous phase. Without the protection afforded by a strong crosslinked network, the poorly photoreactive compounds of this aqueous phase could be leached easily. These results suggest that adhesive formulations should be designed to include hydrophilic multimethacrylate monomers and water compatible initiators. PMID:22331596

  5. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models

    SciTech Connect

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-07-15

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of certain specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds.

  6. Quantitative constraint-based computational model of tumor-to-stroma coupling via lactate shuttle.

    PubMed

    Capuani, Fabrizio; De Martino, Daniele; Marinari, Enzo; De Martino, Andrea

    2015-01-01

    Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell's surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell's demand for ATP, amino-acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment. PMID:26149467
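
    To make the constraint-based idea concrete, here is a toy single-cell flux balance problem solved as a linear program; it reproduces Warburg-like lactate secretion when oxidative capacity is bounded, but it is a deliberately tiny stand-in for the paper's genome-scale multi-cell model, with all stoichiometries and bounds assumed.

        from scipy.optimize import linprog

        # v = [glycolysis, pyruvate oxidation, lactate secretion]; stoichiometry:
        # glycolysis: glc -> 2 pyr + 2 ATP; oxidation: pyr + 3 O2 -> 15 ATP
        c = [-2.0, -15.0, 0.0]                    # maximize total ATP production
        A_eq, b_eq = [[2.0, -1.0, -1.0]], [0.0]   # pyruvate mass balance
        A_ub = [[1.0, 0.0, 0.0],                  # glucose uptake <= 1
                [0.0, 3.0, 0.0]]                  # O2 (oxidative) capacity <= 0.9
        b_ub = [1.0, 0.9]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * 3)
        v_gly, v_ox, v_lac = res.x
        print(f"glucose used {v_gly:.2f}, lactate secreted {v_lac:.2f}")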

  7. Quantitative constraint-based computational model of tumor-to-stroma coupling via lactate shuttle

    PubMed Central

    Capuani, Fabrizio; De Martino, Daniele; Marinari, Enzo; De Martino, Andrea

    2015-01-01

    Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell’s surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell’s demand for ATP, amino-acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment. PMID:26149467

  8. Quantitative studies of animal colour constancy: using the chicken as model.

    PubMed

    Olsson, Peter; Wilby, David; Kelber, Almut

    2016-05-11

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations. PMID:27170714
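
    The receptor-noise-limited distance used to quantify illumination changes has a closed form; below is the trichromatic version of the Vorobyev-Osorio formula (a chicken study would use the tetrachromatic analogue), with quantum catches and noise values invented for the example.

        import numpy as np

        def rnl_delta_s(q_a, q_b, noise):
            """Trichromatic receptor-noise-limited distance (Vorobyev-Osorio)."""
            df = np.log(np.asarray(q_a, float)) - np.log(np.asarray(q_b, float))
            e1, e2, e3 = noise
            num = (e1 ** 2 * (df[1] - df[2]) ** 2 +
                   e2 ** 2 * (df[0] - df[2]) ** 2 +
                   e3 ** 2 * (df[0] - df[1]) ** 2)
            den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
            return np.sqrt(num / den)

        # Invented quantum catches for one stimulus under two illuminants
        print(rnl_delta_s([0.80, 0.50, 0.20], [0.70, 0.50, 0.25],
                          noise=(0.10, 0.07, 0.05)), "JND")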

  9. Quantitation and pharmacokinetic modeling of therapeutic antibody quality attributes in human studies.

    PubMed

    Li, Yinyin; Monine, Michael; Huang, Yu; Swann, Patrick; Nestorov, Ivan; Lyubarskaya, Yelena

    2016-01-01

    A thorough understanding of drug metabolism and disposition can aid in the assessment of efficacy and safety. However, analytical methods used in pharmacokinetics (PK) studies of protein therapeutics are usually based on ELISA, and therefore can provide a limited perspective on the quality of the drug in concentration measurements. Individual post-translational modifications (PTMs) of protein therapeutics are rarely considered for PK analysis, partly because it is technically difficult to recover and quantify individual protein variants from biological fluids. Meanwhile, PTMs may be directly linked to variations in drug efficacy and safety, and therefore understanding of clearance and metabolism of biopharmaceutical protein variants during clinical studies is an important consideration. To address such challenges, we developed an affinity-purification procedure followed by peptide mapping with mass spectrometric detection, which can profile multiple quality attributes of therapeutic antibodies recovered from patient sera. The obtained data enable quantitative modeling, which allows for simulation of the PK of different individual PTMs or attribute levels in vivo and thus facilitate the assessment of quality attributes impact in vivo. Such information can contribute to the product quality attribute risk assessment during manufacturing process development and inform appropriate process control strategy. PMID:27216574

  10. Model-independent quantitative measurement of nanomechanical oscillator vibrations using electron-microscope linescans

    NASA Astrophysics Data System (ADS)

    Wang, Huan; Fenton, J. C.; Chiatti, O.; Warburton, P. A.

    2013-07-01

    Nanoscale mechanical resonators are highly sensitive devices and, therefore, for application as highly sensitive mass balances, they are potentially superior to micromachined cantilevers. The absolute measurement of nanoscale displacements of such resonators remains a challenge, however, since the optical signal reflected from a cantilever whose dimensions are sub-wavelength is at best very weak. We describe a technique for quantitative analysis and fitting of scanning-electron microscope (SEM) linescans across a cantilever resonator, involving deconvolution from the vibrating resonator profile using the stationary resonator profile. This enables determination of the absolute amplitude of nanomechanical cantilever oscillations even when the oscillation amplitude is much smaller than the cantilever width. This technique is independent of any model of secondary-electron emission from the resonator and is, therefore, applicable to resonators with arbitrary geometry and material inhomogeneity. We demonstrate the technique using focussed-ion-beam-deposited tungsten cantilevers of radius ~60–170 nm inside a field-emission SEM, with excitation of the cantilever by a piezoelectric actuator allowing measurement of the full frequency response. Oscillation amplitudes approaching the size of the primary electron-beam can be resolved. We further show that the optimum electron-beam scan speed is determined by a compromise between deflection of the cantilever at low scan speeds and limited spatial resolution at high scan speeds. Our technique will be an important tool for use in precise characterization of nanomechanical resonator devices.
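
    A hedged numerical sketch of the linescan idea: the time-averaged image of a sinusoidally vibrating edge is the stationary profile convolved with the arcsine dwell-time density of the motion, so the amplitude can be recovered by forward-model fitting; the profile shape, pixel size, and amplitude below are mock values, not measurement data.

        import numpy as np

        def arcsine_kernel(amp, dx):
            """Binned dwell-time density of x(t) = amp*sin(wt); the time-averaged
            linescan is the stationary profile convolved with this kernel."""
            edges = np.arange(-amp - dx / 2, amp + dx, dx)
            cdf = np.arcsin(np.clip(edges / amp, -1.0, 1.0)) / np.pi + 0.5
            return np.diff(cdf)

        def blur(profile, amp, dx):
            return np.convolve(profile, arcsine_kernel(amp, dx), mode="same")

        dx = 1.0                                            # nm per pixel (assumed)
        xs = np.arange(512.0)
        stationary = np.exp(-0.5 * ((xs - 256) / 40) ** 2)  # mock stationary scan
        observed = blur(stationary, 25.0, dx)               # mock vibrating scan

        amps = np.arange(5.0, 60.0, 0.5)
        errs = [np.sum((blur(stationary, a, dx) - observed) ** 2) for a in amps]
        print("fitted amplitude (nm):", amps[int(np.argmin(errs))])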

  11. Model-independent quantitative measurement of nanomechanical oscillator vibrations using electron-microscope linescans.

    PubMed

    Wang, Huan; Fenton, J C; Chiatti, O; Warburton, P A

    2013-07-01

    Nanoscale mechanical resonators are highly sensitive devices and, therefore, for application as highly sensitive mass balances, they are potentially superior to micromachined cantilevers. The absolute measurement of nanoscale displacements of such resonators remains a challenge, however, since the optical signal reflected from a cantilever whose dimensions are sub-wavelength is at best very weak. We describe a technique for quantitative analysis and fitting of scanning-electron microscope (SEM) linescans across a cantilever resonator, involving deconvolution from the vibrating resonator profile using the stationary resonator profile. This enables determination of the absolute amplitude of nanomechanical cantilever oscillations even when the oscillation amplitude is much smaller than the cantilever width. This technique is independent of any model of secondary-electron emission from the resonator and is, therefore, applicable to resonators with arbitrary geometry and material inhomogeneity. We demonstrate the technique using focussed-ion-beam-deposited tungsten cantilevers of radius ~60-170 nm inside a field-emission SEM, with excitation of the cantilever by a piezoelectric actuator allowing measurement of the full frequency response. Oscillation amplitudes approaching the size of the primary electron-beam can be resolved. We further show that the optimum electron-beam scan speed is determined by a compromise between deflection of the cantilever at low scan speeds and limited spatial resolution at high scan speeds. Our technique will be an important tool for use in precise characterization of nanomechanical resonator devices. PMID:23902094

  12. Model-independent quantitative measurement of nanomechanical oscillator vibrations using electron-microscope linescans

    SciTech Connect

    Wang, Huan; Fenton, J. C.; Chiatti, O.; Warburton, P. A.

    2013-07-15

    Nanoscale mechanical resonators are highly sensitive devices and, therefore, for application as highly sensitive mass balances, they are potentially superior to micromachined cantilevers. The absolute measurement of nanoscale displacements of such resonators remains a challenge, however, since the optical signal reflected from a cantilever whose dimensions are sub-wavelength is at best very weak. We describe a technique for quantitative analysis and fitting of scanning-electron microscope (SEM) linescans across a cantilever resonator, involving deconvolution from the vibrating resonator profile using the stationary resonator profile. This enables determination of the absolute amplitude of nanomechanical cantilever oscillations even when the oscillation amplitude is much smaller than the cantilever width. This technique is independent of any model of secondary-electron emission from the resonator and is, therefore, applicable to resonators with arbitrary geometry and material inhomogeneity. We demonstrate the technique using focussed-ion-beam–deposited tungsten cantilevers of radius ∼60–170 nm inside a field-emission SEM, with excitation of the cantilever by a piezoelectric actuator allowing measurement of the full frequency response. Oscillation amplitudes approaching the size of the primary electron-beam can be resolved. We further show that the optimum electron-beam scan speed is determined by a compromise between deflection of the cantilever at low scan speeds and limited spatial resolution at high scan speeds. Our technique will be an important tool for use in precise characterization of nanomechanical resonator devices.

  13. Quantitative Profiling of Brain Lipid Raft Proteome in a Mouse Model of Fragile X Syndrome

    PubMed Central

    Kalinowska, Magdalena; Castillo, Catherine; Francesconi, Anna

    2015-01-01

    Fragile X Syndrome, a leading cause of inherited intellectual disability and autism, arises from transcriptional silencing of the FMR1 gene encoding an RNA-binding protein, Fragile X Mental Retardation Protein (FMRP). FMRP can regulate the expression of approximately 4% of brain transcripts through its role in regulation of mRNA transport, stability and translation, thus providing a molecular rationale for its potential pleiotropic effects on neuronal and brain circuitry function. Several intracellular signaling pathways are dysregulated in the absence of FMRP suggesting that cellular deficits may be broad and could result in homeostatic changes. Lipid rafts are specialized regions of the plasma membrane, enriched in cholesterol and glycosphingolipids, involved in regulation of intracellular signaling. Among transcripts targeted by FMRP, a subset encodes proteins involved in lipid biosynthesis and homeostasis, dysregulation of which could affect the integrity and function of lipid rafts. Using a quantitative mass spectrometry-based approach we analyzed the lipid raft proteome of Fmr1 knockout mice, an animal model of Fragile X syndrome, and identified candidate proteins that are differentially represented in Fmr1 knockout mice lipid rafts. Furthermore, network analysis of these candidate proteins reveals connectivity between them and predicts functional connectivity with genes encoding components of myelin sheath, axonal processes and growth cones. Our findings provide insight to aid identification of molecular and cellular dysfunctions arising from Fmr1 silencing and for uncovering shared pathologies between Fragile X syndrome and other autism spectrum disorders. PMID:25849048

  14. Modeling development and quantitative trait mapping reveal independent genetic modules for leaf size and shape.

    PubMed

    Baker, Robert L; Leong, Wen Fung; Brock, Marcus T; Markelz, R J Cody; Covington, Michael F; Devisetty, Upendra K; Edwards, Christine E; Maloof, Julin; Welch, Stephen; Weinig, Cynthia

    2015-10-01

    Improved predictions of fitness and yield may be obtained by characterizing the genetic controls and environmental dependencies of organismal ontogeny. Elucidating the shape of growth curves may reveal novel genetic controls that single-time-point (STP) analyses do not because, in theory, infinite numbers of growth curves can result in the same final measurement. We measured leaf lengths and widths in Brassica rapa recombinant inbred lines (RILs) throughout ontogeny. We modeled leaf growth and allometry as function-valued traits (FVT), and examined genetic correlations between these traits and aspects of phenology, physiology, circadian rhythms and fitness. We used RNA-seq to construct a SNP linkage map and mapped quantitative trait loci (QTL) for these traits. We found genetic trade-offs between leaf size and growth rate FVT and uncovered differences in genotypic and QTL correlations involving FVT vs STPs. We identified leaf shape (allometry) as a genetic module independent of length and width and identified selection on FVT parameters of development. Leaf shape is associated with venation features that affect desiccation resistance. The genetic independence of leaf shape from other leaf traits may therefore enable crop optimization in leaf shape without negative effects on traits such as size, growth rate, duration or gas exchange. PMID:26083847
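
    As a sketch of what treating growth as a function-valued trait involves, the snippet fits a three-parameter logistic curve to a mock leaf-length series; the fitted parameters, rather than single-time-point sizes, are what would then enter QTL mapping. The measurement schedule, parameter values, and noise are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, L, k, t0):
            """Three-parameter logistic growth curve; (L, k, t0) are the FVT."""
            return L / (1 + np.exp(-k * (t - t0)))

        rng = np.random.default_rng(4)
        days = np.arange(5, 40, 3, dtype=float)             # invented schedule
        leaf = logistic(days, 60, 0.25, 18) + rng.normal(0, 1.5, days.size)

        params, _ = curve_fit(logistic, days, leaf, p0=(50, 0.2, 15))
        print("fitted (L, k, t0):", np.round(params, 2))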

  15. Environmental determinants of tropical forest and savanna distribution: A quantitative model evaluation and its implication

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenzhong; Chen, Anping; Piao, Shilong; Rabin, Sam; Shen, Zehao

    2014-07-01

    The distributions of tropical ecosystems are rapidly being altered by climate change and anthropogenic activities. One possible trend—the loss of tropical forests and replacement by savannas—could result in significant shifts in ecosystem services and biodiversity loss. However, the influence and the relative importance of environmental factors in regulating the distribution of tropical forest and savanna biomes are still poorly understood, which makes it difficult to predict future tropical forest and savanna distributions in the context of climate change. Here we use boosted regression trees to quantitatively evaluate the importance of environmental predictors—mainly climatic, edaphic, and fire factors—for the tropical forest-savanna distribution at a mesoscale across the tropics (between 15°N and 35°S). Our results demonstrate that climate alone can explain most of the distribution of tropical forest and savanna at the scale considered; dry season average precipitation is the single most important determinant across tropical Asia-Australia, Africa, and South America. Given the strong tendency of increased seasonality and decreased dry season precipitation predicted by global climate models, we estimate that about 28% of what is now tropical forest would likely be lost to savanna by the late 21st century under the future scenario considered. This study highlights the importance of climate seasonality and interannual variability in predicting the distribution of tropical forest and savanna, supporting the climate as the primary driver in the savanna biogeography.
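
    A minimal boosted-regression-tree sketch in the spirit of the analysis, using synthetic grid cells and an invented forest/savanna rule; the predictor names and decision rule are assumptions, and the printed values are sklearn's impurity-based importances, a stand-in for the relative influence measure such studies report.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(5)
        n = 2000
        # Invented predictors per grid cell: dry-season precip (mm/month),
        # annual precip (mm), fire frequency, soil sand fraction
        X = np.column_stack([rng.uniform(0, 150, n), rng.uniform(500, 3000, n),
                             rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
        y = (X[:, 0] + rng.normal(0, 20, n) > 60).astype(int)  # synthetic "forest"

        brt = GradientBoostingClassifier(n_estimators=300, max_depth=3,
                                         learning_rate=0.05).fit(X, y)
        print("relative influence:", np.round(brt.feature_importances_, 2))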

  16. Quantitation and pharmacokinetic modeling of therapeutic antibody quality attributes in human studies

    PubMed Central

    Li, Yinyin; Monine, Michael; Huang, Yu; Swann, Patrick; Nestorov, Ivan; Lyubarskaya, Yelena

    2016-01-01

    A thorough understanding of drug metabolism and disposition can aid in the assessment of efficacy and safety. However, analytical methods used in pharmacokinetics (PK) studies of protein therapeutics are usually based on ELISA, and therefore can provide a limited perspective on the quality of the drug in concentration measurements. Individual post-translational modifications (PTMs) of protein therapeutics are rarely considered for PK analysis, partly because it is technically difficult to recover and quantify individual protein variants from biological fluids. Meanwhile, PTMs may be directly linked to variations in drug efficacy and safety, and therefore understanding of clearance and metabolism of biopharmaceutical protein variants during clinical studies is an important consideration. To address such challenges, we developed an affinity-purification procedure followed by peptide mapping with mass spectrometric detection, which can profile multiple quality attributes of therapeutic antibodies recovered from patient sera. The obtained data enable quantitative modeling, which allows for simulation of the PK of different individual PTMs or attribute levels in vivo and thus facilitate the assessment of quality attributes impact in vivo. Such information can contribute to the product quality attribute risk assessment during manufacturing process development and inform appropriate process control strategy. PMID:27216574

  17. Toxicity mechanisms of the food contaminant citrinin: application of a quantitative yeast model.

    PubMed

    Pascual-Ahuir, Amparo; Vanacloig-Pedros, Elena; Proft, Markus

    2014-05-01

    Mycotoxins are important food contaminants and a serious threat to human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose-dependent activation of stress-responsive promoters such as GRE2 or SOD2. More specifically, oxidative stress-responsive pathways via the transcription factors Yap1 and Skn7 are critically involved in the response to citrinin. Additionally, genes in various multidrug resistance transport systems are functionally involved in the resistance to citrinin. Our study identifies the antioxidant defense as a major physiological response in the case of citrinin. In general, our results show that the use of live cell gene expression reporters in yeast is a powerful tool to identify toxicity targets and detoxification mechanisms of a broad range of food contaminants relevant for human nutrition. PMID:24858409

  18. A quantitative risk assessment model to evaluate effective border control measures for rabies prevention.

    PubMed

    Weng, Hsin-Yi; Wu, Pei-I; Yang, Ping-Cheng; Tsai, Yi-Lun; Chang, Chao-Chin

    2010-01-01

    Border control is the primary method to prevent rabies emergence. This study developed a quantitative risk model incorporating stochastic processes to evaluate whether border control measures could efficiently prevent rabies introduction through importation of cats and dogs, using Taiwan as an example. Both legal importation and illegal smuggling were investigated. The impacts of reduced quarantine and/or waiting period on the risk of rabies introduction were also evaluated. The results showed that Taiwan's current animal importation policy could effectively prevent rabies introduction through legal importation of cats and dogs. The median risk of a rabid animal penetrating current border control measures and entering Taiwan was 5.33 × 10^-8 (95th percentile: 3.20 × 10^-7). However, illegal smuggling may expose Taiwan to a great risk of rabies emergence. Reduction of the quarantine and/or waiting period would affect the risk differently, depending on the applied assumptions, such as increased vaccination coverage, enforced customs checking, and/or change in the number of legal importations. Although the changes in the estimated risk under the assumed alternatives were not substantial except for completely abolishing quarantine, the consequences of rabies introduction may yet be considered significant in a rabies-free area. Therefore, a comprehensive benefit-cost analysis needs to be conducted before recommending these alternative measures. PMID:19822125
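
    A stochastic sketch of the penetration calculation such a model performs for legal imports; every distribution below (import volume, prevalence at origin, quarantine failure) is an illustrative assumption, so the printed risks are not comparable to the study's estimates.

        import numpy as np

        rng = np.random.default_rng(6)
        n_iter = 100_000
        n_animals = rng.poisson(5000, n_iter)      # yearly legal imports (assumed)
        p_infected = rng.beta(1, 200_000, n_iter)  # prevalence at origin (assumed)
        p_missed = rng.beta(2, 200, n_iter)        # slips past quarantine (assumed)

        # Probability that at least one rabid animal enters in a given year
        risk = 1 - (1 - p_infected * p_missed) ** n_animals
        print("median annual risk:", np.median(risk))
        print("95th percentile:   ", np.percentile(risk, 95))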

  19. A quantitative microbiological exposure assessment model for Bacillus cereus in REPFEDs.

    PubMed

    Daelman, Jeff; Membré, Jeanne-Marie; Jacxsens, Liesbeth; Vermeulen, An; Devlieghere, Frank; Uyttendaele, Mieke

    2013-09-16

    One of the pathogens of concern in refrigerated and processed foods of extended durability (REPFED) is psychrotrophic Bacillus cereus, because of its ability to survive pasteurisation and grow at low temperatures. In this study a quantitative microbiological exposure assessment (QMEA) of psychrotrophic B. cereus in REPFEDs is presented. The goal is to quantify (i) the prevalence and concentration of B. cereus during production and shelf life, (ii) the number of packages with potential emetic toxin formation and (iii) the impact of different processing steps and consumer behaviour on the exposure to B. cereus from REPFEDs. The QMEA comprises the entire production and distribution process, from raw materials over pasteurisation and up to the moment it is consumed or discarded. To model this process the modular process risk model (MPRM) was used (Nauta, 2002). The product life was divided into nine modules, each module corresponding to a basic process: (1) raw material contamination, (2) cross contamination during handling, (3) inactivation during preparation, (4) growth during intermediate storage, (5) partitioning of batches in portions, (6) mixing portions to create the product, (7) recontamination during assembly and packaging, (8) inactivation during pasteurisation and (9) growth during shelf life. Each of the modules was modelled and built using a combination of newly gathered and literature data, predictive models and expert opinions. Units (batch/portion/package) with a B. cereus concentration of 10^5 CFU/g or more were considered 'risky' units. Results show that the main drivers of variability and uncertainty are consumer behaviour, strain variability and modelling error. The prevalence of B. cereus in the final products is estimated at 48.6% (±0.01%) and the number of packs with too high B. cereus counts at the moment of consumption is estimated at 4750 packs per million (0.48%). Cold storage at retail and consumer level is vital in limiting the exposure

  20. Automatic and Quantitative Measurement of Collagen Gel Contraction Using Model-Guided Segmentation.

    PubMed

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R; Zhao, Chunfeng; Amadio, Peter C; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behaviors and tissue material properties. So far, the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model (DCM) which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average Dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods. PMID:24092954

  1. HomoSAR: bridging comparative protein modeling with quantitative structural activity relationship to design new peptides.

    PubMed

    Borkar, Mahesh R; Pissurlenkar, Raghuvir R S; Coutinho, Evans C

    2013-11-15

    Peptides play significant roles in the biological world. To optimize activity for a specific therapeutic target, peptide library synthesis is inevitable, which is time-consuming and expensive. Computational approaches provide a promising way to elucidate the structural basis for the design of new peptides. Earlier, we proposed a novel methodology termed HomoSAR to gain insight into the structure-activity relationships underlying peptides. Based on an integrated approach, HomoSAR uses the principles of homology modeling in conjunction with the quantitative structural activity relationship formalism to predict and design new peptide sequences with optimum activity. In the present study, we establish that the HomoSAR methodology can be universally applied to all classes of peptides irrespective of sequence length by studying HomoSAR on three peptide datasets, viz., angiotensin-converting enzyme inhibitory peptides, CAMEL-s antibiotic peptides, and hAmphiphysin-1 SH3 domain binding peptides, using a set of descriptors related to the hydrophobic, steric, and electronic properties of the 20 natural amino acids. Models generated for all three datasets have statistically significant correlation coefficients (r^2), predictive r^2 (r^2_pred), and cross-validated coefficients (q^2_LOO). The daintiness of this technique lies in its simplicity and ability to extract all the information contained in the peptides to elucidate the underlying structure-activity relationships. The difficulties of correlating both sequence diversity and variation in length of the peptides with their biological activity can be addressed. The study has been able to identify the preferred or detrimental nature of amino acids at specific positions in the peptide sequences. PMID:24105965
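
    As a generic stand-in for the descriptors-to-activity regression step (not the HomoSAR pipeline itself), here is a PLS model with leave-one-out q^2 on synthetic position-wise descriptors; the dataset size, descriptor scales, and component count are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(7)
        n_pep, n_desc = 80, 18     # 6 aligned positions x 3 descriptor scales
        X = rng.normal(size=(n_pep, n_desc))
        beta = rng.normal(size=n_desc) * (rng.random(n_desc) < 0.3)
        y = X @ beta + rng.normal(0, 0.5, n_pep)       # synthetic activity

        pls = PLSRegression(n_components=4)
        pred = cross_val_predict(pls, X, y, cv=n_pep).ravel()   # leave-one-out
        q2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        print("q2_LOO:", round(q2, 2))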

  2. Early events in cell spreading as a model for quantitative analysis of biomechanical events.

    PubMed

    Wolfenson, Haguy; Iskratsch, Thomas; Sheetz, Michael P

    2014-12-01

    In this review, we focus on the early events in the process of fibroblast spreading on fibronectin matrices of different rigidities. We present a focused position piece that illustrates the many different tests that a cell makes of its environment before it establishes mature matrix adhesions. When a fibroblast is placed on fibronectin-coated glass surfaces at 37°C, it typically spreads and polarizes within 20-40 min primarily through αvβ3 integrin binding to fibronectin. In that short period, the cell goes through three major phases that involve binding, integrin activation, spreading, and mechanical testing of the surface. The advantage of using the model system of cell spreading from the unattached state is that it is highly reproducible and the stages that the cell undergoes can thus be studied in a highly quantitative manner, in both space and time. The mechanical and biochemical parameters that matter in this example are often surprising because of both the large number of tests that occur and the precision of the tests. We discuss our current understanding of those tests, the decision tree that is involved in this process, and an extension to the behavior of the cells at longer time periods when mature adhesions develop. Because many other matrices and integrins are involved in cell-matrix adhesion, this model system gives us a limited view of a subset of cellular behaviors that can occur. However, by defining one cellular process at a molecular level, we know more of what to expect when defining other processes. Because each cellular process will involve some different proteins, a molecular understanding of multiple functions operating within a given cell can lead to strategies to selectively block a function. PMID:25468330

  3. Elucidating Quantitative Stability/Flexibility Relationships Within Thioredoxin and its Fragments Using a Distance Constraint Model

    PubMed Central

    Jacobs, Donald J.; Livesay, Dennis R.; Hules, Jeremy; Tasayco, Maria Luisa

    2015-01-01

    Numerous quantitative stability/flexibility relationships within Escherichia coli thioredoxin (Trx) and its fragments are determined using a minimal distance constraint model (DCM). A one-dimensional free energy landscape as a function of global flexibility reveals Trx to fold in a low-barrier two-state process, with a voluminous transition state. Near the folding transition temperature, the native free energy basin is markedly skewed to allow partial unfolded forms. Under native conditions the skewed shape is lost, and the protein forms a compact structure with some flexibility. Predictions on ten Trx fragments are generally consistent with experimental observations that they are disordered, and that complementary fragments reconstitute. A hierarchical unfolding pathway is uncovered using an exhaustive computational procedure of breaking interfacial cross-linking hydrogen bonds that span over a series of fragment dissociations. The unfolding pathway leads to a stable core structure (residues 22–90), predicted to act as a kinetic trap. Direct connection between degree of rigidity within molecular structure and non-additivity of free energy is demonstrated using a thermodynamic cycle involving fragments and their hierarchical unfolding pathway. Additionally, the model provides insight about molecular cooperativity within Trx in its native state, and about intermediate states populating the folding/unfolding pathways. Native state cooperativity correlation plots highlight several flexibly correlated regions, giving insight into the catalytic mechanism that facilitates access to the active site disulfide bond. Residual native cooperativity correlations are present in the core substructure, suggesting that Trx can function when it is partly unfolded. This natively disordered kinetic trap, interpreted as a molten globule, has a wide temperature range of metastability, and it is identified as the “slow intermediate state” observed in kinetic experiments.

  4. Cortical neuron activation induced by electromagnetic stimulation: a quantitative analysis via modelling and simulation.

    PubMed

    Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping

    2016-02-01

    Previous simulation work on the mechanisms of non-invasive neuromodulation has isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors in realistic neurons is still lacking. To give a comprehensive investigation on the stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds but substantial variances were also observed within layers; reducing pulse length could magnify the threshold values as well as the variance; tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that the cellular excitability might represent the result of the competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation intimately resemble recordings in physiological and clinical studies, which seems to suggest that, with minimum simplification of the neuron model, the cable theory-based simulation approach can have sufficient verisimilitude to give quantitatively accurate evaluation of cell activities in response to the externally applied field. PMID:26719168

  5. Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models

    NASA Astrophysics Data System (ADS)

    Ramli, Huda Mohd.; Esler, J. Gavin

    2016-07-01

    A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
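
    A minimal sketch of the kind of scheme being evaluated: an Euler-Maruyama step for a one-dimensional Langevin LPDM in homogeneous turbulence, whose long-time dispersion should approach the random displacement model value 2Kt with K = sigma^2 * tau. The parameters are illustrative; this is not one of the paper's specific test problems.

        import numpy as np

        rng = np.random.default_rng(8)
        tau, sigma2, dt, n = 20.0, 1.0, 1.0, 10_000   # illustrative parameters
        x = np.zeros(n)
        u = rng.normal(0, np.sqrt(sigma2), n)         # initial turbulent velocities
        steps = 600
        for _ in range(steps):
            # Euler-Maruyama step for du = -u/tau dt + sqrt(2 sigma2/tau) dW
            u += -u / tau * dt + np.sqrt(2 * sigma2 / tau * dt) * rng.normal(size=n)
            x += u * dt

        t = steps * dt   # long-time limit: variance -> 2*K*t with K = sigma2*tau
        print(f"ensemble variance {x.var():.0f} vs RDM value {2 * sigma2 * tau * t:.0f}")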

  6. Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions

    NASA Astrophysics Data System (ADS)

    Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu

    2014-04-01

    Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine, to aid in predicting cell fate. These models can be used as tools e.g. in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and personalized regenerative medicine. Using time-lapsed imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics.

  7. MODIS volcanic ash retrievals vs FALL3D transport model: a quantitative comparison

    NASA Astrophysics Data System (ADS)

    Corradini, S.; Merucci, L.; Folch, A.

    2010-12-01

    Satellite retrievals and transport models represent the key tools to monitor the evolution of volcanic clouds. Because of the harmful effects of fine ash particles on aircraft, real-time tracking and forecasting of volcanic clouds is key for aviation safety. Besides safety, the economic consequences of airport disruptions must also be taken into account. The airport closures due to the recent Icelandic Eyjafjöll eruption caused millions of passengers to be stranded not only in Europe, but across the world. IATA (the International Air Transport Association) estimates that the worldwide airline industry lost a total of about 2.5 billion euros during the disruption. Both safety and economic issues require reliable and robust ash cloud retrievals and trajectory forecasting. Intercomparison between remote sensing and modeling is required to assure precise and reliable volcanic ash products. In this work we perform a quantitative comparison between Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and Aerosol Optical Depth (AOD) with the FALL3D ash dispersal model. MODIS, aboard the NASA-Terra and NASA-Aqua polar satellites, is a multispectral instrument with 36 spectral bands operating in the VIS-TIR spectral range and spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 micron have been used for the ash retrievals through the Brightness Temperature Difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. Three MODIS images of Mt. Etna collected on 28, 29 and 30 October during the 2002 eruption have been considered as test cases. The results show a general good agreement between the retrieved and the modeled volcanic clouds in the first 300 km from the vents.
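
    The Brightness Temperature Difference test mentioned above is simple to state: silicate ash reverses the sign of BT(11 um) - BT(12 um) relative to water/ice cloud. The sketch applies a plain negative-BTD threshold to invented brightness temperatures; operational retrievals add corrections (e.g. for water vapour and surface emissivity) before quantifying mass and AOD.

        import numpy as np

        bt11 = np.array([[265.0, 262.1, 270.4],
                         [268.3, 261.0, 272.9]])   # invented band-31 BT (K)
        bt12 = np.array([[266.8, 263.9, 269.1],
                         [267.5, 263.2, 271.8]])   # invented band-32 BT (K)

        btd = bt11 - bt12                          # negative over silicate ash
        ash_mask = btd < 0.0
        print(np.round(btd, 1))
        print("flagged ash pixels:", int(ash_mask.sum()))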

  8. Aerobic biodegradation of chlorinated ethenes in a fractured bedrock aquifer: quantitative assessment by compound-specific isotope analysis (CSIA) and reactive transport modeling.

    PubMed

    Pooley, Kathryn E; Blessing, Michaela; Schmidt, Torsten C; Haderlein, Stefan B; Macquarrie, Kerry T B; Prommer, Henning

    2009-10-01

    A model-based analysis of concentration and isotope data was carried out to assess natural attenuation of chlorinated ethenes in an aerobic fractured bedrock aquifer. Tetrachloroethene (PCE) concentrations decreased downgradient of the source, but constant δ13C signatures indicated the absence of PCE degradation. In contrast, geochemical and isotopic data demonstrated degradation of trichloroethene (TCE) and cis-1,2-dichloroethene (DCE) under the prevailing oxic conditions. Numerical modeling was employed to simulate isotopic enrichment of chlorinated ethenes and to evaluate alternative degradation pathway scenarios. Existing field information on groundwater flow, solute transport, geochemistry, and δ13C signatures of the chlorinated ethenes was integrated via reactive transport simulations. The results provided strong evidence for the occurrence of aerobic TCE and DCE degradation. The chlorinated ethene concentrations together with stable carbon isotope data allowed us to reliably constrain the assessment of the extent of biodegradation at the site and plume simulations quantitatively linked aerobic biodegradation with isotope signatures in the field. Our investigation provides the first quantitative assessment of aerobic biodegradation of chlorinated ethenes in a fractured rock aquifer based on compound specific stable isotope measurements and reactive transport modeling. PMID:19848161
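
    CSIA-based assessments of this kind commonly convert an isotope shift into an extent of degradation through the Rayleigh equation; the helper below does that conversion, with the delta values and enrichment factor chosen only for illustration (the paper couples isotope signatures to reactive transport simulations rather than using the closed form alone).

        import numpy as np

        def extent_of_biodegradation(delta_source, delta_sample, epsilon):
            """B = 1 - f, with f from the Rayleigh approximation
            delta_sample = delta_source + epsilon * ln(f); values in permil."""
            f = np.exp((delta_sample - delta_source) / epsilon)
            return 1.0 - f

        # Invented values: TCE enriched from -24.0 to -18.5 permil, epsilon = -10
        print(round(extent_of_biodegradation(-24.0, -18.5, -10.0), 2))  # ~0.42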

  9. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors

    PubMed Central

    Shuryak, Igor; Dadachova, Ekaterina

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate, than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental

  10. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors.

    PubMed

    Shuryak, Igor; Dadachova, Ekaterina

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically-plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate, than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental

  11. Quantitative Structure-Property Relationship (QSPR) Modeling of Drug-Loaded Polymeric Micelles via Genetic Function Approximation

    PubMed Central

    Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan

    2015-01-01

    Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments. PMID:25780923
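
    As a rough illustration of the genetic-function-approximation idea, the sketch below uses a genetic algorithm to select a small descriptor subset for a linear model scored by cross-validation. The data, descriptors, and scoring are toy stand-ins; the published GFA models also evolve spline and quadratic basis terms and use Friedman's lack-of-fit measure, which are not reproduced here.

    ```python
    # Toy GFA-style descriptor selection: GA over descriptor subsets, fitness =
    # cross-validated R^2 of a linear model. Data are random stand-ins.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))                   # 60 polymers x 20 descriptors
    y = X[:, 3] - 2 * X[:, 7] + rng.normal(scale=0.1, size=60)  # toy loading capacity

    def fitness(mask):
        if mask.sum() == 0:
            return -np.inf
        return cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()

    pop = rng.random((30, 20)) < 0.2                # 30 candidate subsets
    for _ in range(40):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]     # keep the 10 fittest
        children = []
        for _ in range(20):
            a, b = parents[rng.integers(10, size=2)]
            child = np.where(rng.random(20) < 0.5, a, b)  # uniform crossover
            child ^= rng.random(20) < 0.02                # point mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected descriptors:", np.flatnonzero(best))
    ```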

  12. Phenotypic T Cell Exhaustion in a Murine Model of Bacterial Infection in the Setting of Pre-Existing Malignancy

    PubMed Central

    Mittal, Rohit; Wagener, Maylene; Breed, Elise R.; Liang, Zhe; Yoseph, Benyam P.; Burd, Eileen M.; Farris, Alton B.

    2014-01-01

    While much of cancer immunology research has focused on anti-tumor immunity both systemically and within the tumor microenvironment, little is known about the impact of pre-existing malignancy on pathogen-specific immune responses. Here, we sought to characterize the antigen-specific CD8+ T cell response following a bacterial infection in the setting of pre-existing pancreatic adenocarcinoma. Mice with established subcutaneous pancreatic adenocarcinomas were infected with Listeria monocytogenes, and antigen-specific CD8+ T cell responses were compared to those in control mice without cancer. While the kinetics and magnitude of antigen-specific CD8+ T cell expansion and accumulation were comparable between the cancer and non-cancer groups, bacterial antigen-specific CD8+ T cells and total CD4+ and CD8+ T cells in cancer mice exhibited increased expression of the coinhibitory receptors BTLA, PD-1, and 2B4. Furthermore, increased inhibitory receptor expression was associated with reduced IFN-γ and increased IL-2 production by bacterial antigen-specific CD8+ T cells in the cancer group. Taken together, these data suggest that cancer's immune suppressive effects are not limited to the tumor microenvironment, but that pre-existing malignancy induces phenotypic exhaustion in T cells by increasing expression of coinhibitory receptors and may impair pathogen-specific CD8+ T cell functionality and differentiation. PMID:24796533

  13. Phenotypic T cell exhaustion in a murine model of bacterial infection in the setting of pre-existing malignancy.

    PubMed

    Mittal, Rohit; Wagener, Maylene; Breed, Elise R; Liang, Zhe; Yoseph, Benyam P; Burd, Eileen M; Farris, Alton B; Coopersmith, Craig M; Ford, Mandy L

    2014-01-01

    While much of cancer immunology research has focused on anti-tumor immunity both systemically and within the tumor microenvironment, little is known about the impact of pre-existing malignancy on pathogen-specific immune responses. Here, we sought to characterize the antigen-specific CD8+ T cell response following a bacterial infection in the setting of pre-existing pancreatic adenocarcinoma. Mice with established subcutaneous pancreatic adenocarcinomas were infected with Listeria monocytogenes, and antigen-specific CD8+ T cell responses were compared to those in control mice without cancer. While the kinetics and magnitude of antigen-specific CD8+ T cell expansion and accumulation were comparable between the cancer and non-cancer groups, bacterial antigen-specific CD8+ T cells and total CD4+ and CD8+ T cells in cancer mice exhibited increased expression of the coinhibitory receptors BTLA, PD-1, and 2B4. Furthermore, increased inhibitory receptor expression was associated with reduced IFN-γ and increased IL-2 production by bacterial antigen-specific CD8+ T cells in the cancer group. Taken together, these data suggest that cancer's immune suppressive effects are not limited to the tumor microenvironment, but that pre-existing malignancy induces phenotypic exhaustion in T cells by increasing expression of coinhibitory receptors and may impair pathogen-specific CD8+ T cell functionality and differentiation. PMID:24796533

  14. A bibliography of terrain modeling (geomorphometry), the quantitative representation of topography: supplement 4.0

    USGS Publications Warehouse

    Pike, Richard J.

    2002-01-01

    Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th Century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement

  15. Statistical Modeling Approach to Quantitative Analysis of Interobserver Variability in Breast Contouring

    SciTech Connect

    Yang, Jinzhong; Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A.; Zhang, Lifei; Balter, Peter; Court, Laurence E.; Li, X. Allen; Dong, Lei

    2014-05-01

    Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously been trained in the Radiation Therapy Oncology Group (RTOG) consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 more patients, who were contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, a standard deviation (SD) of ±5.9%, a skewness of −0.7, and an excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, an SD of ±3.4%, a skewness of −0.79, and an excess kurtosis of 0.83, indicating much better consistency among individual contours. Similar results were obtained for the analysis of 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively and to
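
    The core quantitative step, fitting individual agreement scores to a beta distribution and summarizing its moments, is easy to sketch. The Jaccard values below are made up for illustration.

    ```python
    # Fit made-up Jaccard agreement scores to a beta distribution on [0, 1].
    import numpy as np
    from scipy import stats

    jaccard = np.array([0.78, 0.84, 0.88, 0.90, 0.86, 0.81, 0.92, 0.89])
    a, b, loc, scale = stats.beta.fit(jaccard, floc=0, fscale=1)
    dist = stats.beta(a, b)
    print(f"mean = {dist.mean():.3f}, SD = {dist.std():.3f}")
    print(f"skewness = {dist.stats(moments='s'):.2f}, "
          f"excess kurtosis = {dist.stats(moments='k'):.2f}")
    ```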

  16. A quantitative model of the phase behavior of recombinant pH-responsive elastin-like polypeptides

    PubMed Central

    MacKay, J. Andrew; Callahan, Daniel J.; FitzGerald, Kelly N.; Chilkoti, Ashutosh

    2010-01-01

    Quantitative models are required to engineer biomaterials with environmentally responsive properties. With this goal in mind, we developed a model that describes the pH-dependent phase behavior of a class of stimulus-responsive elastin-like polypeptides (ELPs) that undergo reversible phase separation in response to their solution environment. Under isothermal conditions, charged ELPs can undergo phase separation when their charge is neutralized. Optimization of this behavior has been challenging because the pH at which they phase separate, pHt, depends on their composition, molecular weight, concentration, and temperature. To address this problem, we developed a quantitative model to describe the phase behavior of charged ELPs that uses the Henderson-Hasselbalch relationship to describe the effect of side-chain ionization on the phase transition temperature of an ELP. The model was validated with pH-responsive ELPs that contained either acidic (Glu) or basic (His) residues. The phase separation of both ELPs fit this model across a range of pH. These results have important implications for applications of pH-responsive elastin-like polypeptides, because they provide a quantitative model for the rational design of pH-responsive polypeptides whose transition can be triggered at a specified pH. PMID:20925333
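
    A minimal sketch of the model's central ingredient follows: the Henderson-Hasselbalch equation gives the ionized fraction of acidic (Glu) side chains at a given pH, and the transition temperature is interpolated between the values for the fully protonated and fully charged polypeptide. The parameter values are illustrative assumptions, not the paper's fitted constants.

    ```python
    # Henderson-Hasselbalch link between pH, side-chain ionization, and the
    # ELP transition temperature; parameter values are illustrative.
    from scipy.optimize import brentq

    def ionized_fraction(pH, pKa=4.5):              # acidic (Glu-containing) ELP
        return 1.0 / (1.0 + 10.0 ** (pKa - pH))

    def transition_temperature(pH, Tt_neutral=25.0, Tt_charged=70.0):
        f = ionized_fraction(pH)
        return Tt_neutral + (Tt_charged - Tt_neutral) * f   # degrees C

    # Isothermal phase separation at solution temperature T occurs around the
    # pH where Tt(pH) crosses T; solve for that transition pH (pHt).
    T_solution = 37.0
    pHt = brentq(lambda pH: transition_temperature(pH) - T_solution, 2.0, 9.0)
    print(f"predicted pHt at {T_solution} C: {pHt:.2f}")
    ```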

  17. Break-up of Gondwana and opening of the South Atlantic: Review of existing plate tectonic models

    USGS Publications Warehouse

    Ghidella, M.E.; Lawver, L.A.; Gahagan, L.M.

    2007-01-01

    each model. We also plot reconstructions at four selected epochs for all models using the same projection and scale to facilitate comparison. The diverse simplifying assumptions that need to be made in every case regarding plate fragmentation to account for the numerous syn-rift basins and periods of stretching are strong indicators that rigid plate tectonics is too simple a model for the present problem.

  18. Quantitative hazard assessment at Vulcano (Aeolian islands): integration of geology, event statistics and physical modelling

    NASA Astrophysics Data System (ADS)

    Dellino, Pierfrancesco; de Astis, Gianfilippo; La Volpe, Luigi; Mele, Daniela; Sulpizio, Roberto

    2010-05-01

    The analysis of stratigraphy and of the particle features of pyroclastic deposits allowed the reconstruction of the volcanic history of La Fossa di Vulcano. An eruptive scenario driven by shallow phreatomagmatic explosions emerged. A statistical analysis of the pyroclastic successions led to the definition of a repetitive sequence of dilute pyroclastic density currents as the most probable events in the short term, followed by fallout of dense ballistic blocks. The scale of such events is related to the amount of magma involved in each explosion. Events involving a million cubic meters of magma are probable in view of what happened in the most recent eruptions. They led to the formation of dilute pyroclastic density currents hundreds of meters thick, moving down the volcano slope at velocities exceeding 50 m/s. The dispersion of density currents affected the whole Vulcano Porto area and the Vulcanello area, and also overrode the Fossa Caldera's rim, spreading over the Piano area. Similarly, older pyroclastic deposits erupted at different times (Piano Grotte dei Rossi formation, ~20-7.7 ka) from vents within La Fossa Caldera and before La Fossa Cone formation. They were also phreatomagmatic in origin and fed dilute pyroclastic density currents (PDCs). They represent the eruptions with the highest magnitude on the island. Therefore, for the aim of hazard assessment, these deposits from La Fossa Cone and La Fossa Caldera were used to depict eruptive scenarios in the short term and in the long term. On the basis of physical models that make use of pyroclastic deposit particle features, the impact parameters for each scenario were calculated: the dynamic pressure and particle volumetric concentration of the density currents, and the impact energy of ballistic blocks. On this basis, a quantitative hazard map is presented, which could be of direct use for territory planning and for the calculation of the expected damage.
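
    For orientation, the two density-current impact parameters named above can be combined in a standard back-of-the-envelope calculation: the particle volumetric concentration sets the bulk density of the dilute current, and the dynamic pressure follows as one half the bulk density times the velocity squared. The values below are illustrative, not the paper's results.

    ```python
    # Back-of-the-envelope impact parameters for a dilute PDC (assumed values).
    rho_particle = 2500.0   # kg/m^3, dense-rock equivalent of the particles
    rho_gas = 0.5           # kg/m^3, hot gas phase
    C = 0.001               # particle volumetric concentration (dilute current)
    v = 50.0                # m/s, velocity scale quoted in the abstract

    rho_bulk = C * rho_particle + (1 - C) * rho_gas
    P_dyn = 0.5 * rho_bulk * v ** 2
    print(f"bulk density = {rho_bulk:.2f} kg/m^3, "
          f"dynamic pressure = {P_dyn / 1000:.1f} kPa")
    ```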

  19. Optimization of arterial spin labeling MRI for quantitative tumor perfusion in a mouse xenograft model.

    PubMed

    Rajendran, Reshmi; Liang, Jieming; Tang, Mei Yee Annie; Henry, Brian; Chuang, Kai-Hsiang

    2015-08-01

    Perfusion is an important biomarker of tissue function and has been associated with tumor pathophysiology such as angiogenesis and hypoxia. Arterial spin labeling (ASL) MRI allows noninvasive and quantitative imaging of perfusion; however, its application in mouse xenograft tumor models has been challenging due to the low sensitivity and high perfusion heterogeneity. In this study, flow-sensitive alternating inversion recovery (FAIR) ASL was optimized for a mouse xenograft tumor. To assess the sensitivity and reliability for measuring low perfusion, the lumbar muscle was used as a reference region. By optimizing the number of averages and inversion times, muscle perfusion as low as 32.4 ± 4.8 (mean ± standard deviation) ml/100 g/min could be measured in 20 min at 7 T with a quantification error of 14.4 ± 9.1%. Applying the optimized protocol, heterogeneous perfusion ranging from 49.5 to 211.2 ml/100 g/min was observed in a renal carcinoma. To understand the relationship with tumor pathology, global and regional tumor perfusion was compared with histological staining of blood vessels (CD34), hypoxia (CAIX) and apoptosis (TUNEL). No correlation was observed when the global tumor perfusion was compared with these pathological parameters. Regional analysis showed that areas of high perfusion had low microvessel density, which was due to larger vessel area compared with areas of low perfusion. Nonetheless, these were not correlated with hypoxia or apoptosis. The results suggest that tumor perfusion may reflect certain aspects of angiogenesis, but its relationship with other pathologies needs further investigation. PMID:26104980
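
    A common form of FAIR quantification (not necessarily the exact pipeline used in this study) treats inflowing blood as shortening the apparent T1 measured after slice-selective inversion, so that 1/T1_sel = 1/T1_glob + f/λ, where λ is the blood-tissue partition coefficient. A minimal sketch with assumed T1 values of the order expected for muscle:

    ```python
    # FAIR perfusion from the apparent-T1 difference (assumed illustrative values).
    lam = 0.9         # blood-tissue partition coefficient, mL/g
    T1_glob = 1.90    # s, apparent T1 with global (non-selective) inversion
    T1_sel = 1.88     # s, apparent T1 with slice-selective inversion

    f = lam * (1.0 / T1_sel - 1.0 / T1_glob)   # mL/g/s
    print(f"perfusion = {f * 100 * 60:.1f} mL/100 g/min")   # ~30, muscle-like
    ```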

  20. Pedagogical implications of approaches to study in distance learning: developing models through qualitative and quantitative analysis.

    PubMed

    Carnwell, R

    2000-05-01

    The need for flexibility in the delivery of nurse education has been identified by various initiatives including: widening the entry gate; continuous professional development; and the specialist practitioner. Access to degree level programmes is creating the need to acquire academic credit through flexible learning. The aim of this study was to further develop the relationships between the need for guidance, materials design, and learning styles and strategies, and how these impact upon the construction of meaning. The study is based on interviews of 20 female community nurses purposively selected from the 96 respondents who had previously completed a survey questionnaire. The interviews were underpinned by theories relating to learning styles and approaches to study. Of particular concern was how these variables are mediated by student context, personal factors and materials design, to influence the need for support and guidance. The interview transcripts were first analysed using open and axial coding. Three approaches to study emerged from the data - systematic waders, speedy-focusers and global dippers - which were linked to other concepts and categories. Categories were then assigned numerical codes and subjected to logistic regression analysis. The attributes of the three approaches to study, arising from both qualitative and quantitative analysis, are explained in detail. The pedagogical implications of the three approaches to study are explained by their predicted relationships to other variables, such as support and guidance, organization of study, materials design and role of the tutor. The global dipper approach is discussed in more detail due to its association with a variety of predictor variables, not associated with the other two approaches to study. A feedback model is then developed to explore the impact of guidance on the global dipper approach. The paper makes recommendations for guidance to students using different approaches to study in distance
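
    The quantitative step described, coding categories numerically and regressing approach membership on candidate predictors, can be sketched as follows; the variable names and data are hypothetical stand-ins, and the study's actual predictors and codings are not reproduced.

    ```python
    # Hypothetical coded variables predicting membership of the "global dipper"
    # approach; data are random stand-ins, not the study's interview codes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 96  # matches the survey sample size mentioned above
    X = np.column_stack([
        rng.integers(0, 2, n),   # needs_guidance (0/1)
        rng.integers(0, 2, n),   # structured_study_organization (0/1)
        rng.integers(0, 2, n),   # relies_on_tutor_feedback (0/1)
    ])
    is_global_dipper = rng.integers(0, 2, n)     # toy outcome labels

    model = LogisticRegression().fit(X, is_global_dipper)
    print("odds ratios:", np.exp(model.coef_).round(2))
    ```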

  1. Quantitative Histologic Evidence of Amifostine Induced Cytoprotection in an Irradiated Murine Model of Mandibular Distraction Osteogenesis

    PubMed Central

    Tchanque-Fossuo, Catherine N.; Donneys, Alexis; Razdolsky, Elizabeth R.; Monson, Laura; Farberg, Aaron S.; Deshpande, Sagar S.; Sarhaddi, Deniz; Poushanchi, Behdod; Goldstein, Steven A.; Buchman, Steven R.

    2012-01-01

    Background: Head and neck cancer (HNC) management requires adjuvant radiation therapy (XRT). The authors have previously demonstrated the damaging effect of a human equivalent dose of radiation (HEDR) on a murine mandibular model of distraction osteogenesis (DO). Utilizing quantitative histomorphometry (QHM), our specific aim is to objectively measure the radio-protective effects of Amifostine (AMF) on the cellular integrity and tissue quality of an irradiated and distracted regenerate. Methods: Sprague Dawley rats were randomly assigned into 2 groups: XRT/DO and AMF/XRT/DO, which received AMF prior to XRT. Both groups were given HEDR in 5 fractionated doses and underwent a left mandibular osteotomy with bilateral fixator placement. Distraction to 5.1 mm was followed by a 28-day consolidation period. Left hemimandibles were harvested. QHM was performed for osteocyte count (Oc), empty lacunae (EL), and Bone Volume/Tissue Volume (BV/TV) and Osteoid Volume/Tissue Volume (OV/TV) ratios. Results: AMF/XRT/DO exhibited bony bridging, as opposed to the fibrous unions of XRT/DO. QHM analysis revealed a statistically significantly higher Oc and BV/TV ratio in AMF-treated mandibles compared with irradiated mandibles. There was a corresponding decrease in EL and in the OV/TV ratio between AMF/XRT/DO and XRT/DO. Conclusion: We have successfully established the significant osseous cytoprotective and histoprotective capacity of AMF on DO in the face of XRT. AMF's sparing effect on bone cellularity correlated with an increase in bony union and elimination of fibrous union. We posit that a demonstration of similar efficacy of AMF in the clinic may allow the successful implementation of DO as a viable reconstructive option for HNC in the future. PMID:22878481

  2. Quantitative Analysis and Modeling of 3-D TSV-Based Power Delivery Architectures

    NASA Astrophysics Data System (ADS)

    He, Huanyu

    As 3-D technology enters the commercial production stage, it is critical to understand different 3-D power delivery architectures on the stacked ICs and packages with through-silicon vias (TSVs). Appropriate design, modeling, analysis, and optimization approaches of the 3-D power delivery system are of foremost significance and great practical interest to the semiconductor industry in general. Based on fundamental physics of 3-D integration components, the objective of this thesis work is to quantitatively analyze the power delivery for 3D-IC systems, develop appropriate physics-based models and simulation approaches, understand the key issues, and provide potential solutions for design of 3D-IC power delivery architectures. In this work, a hybrid simulation approach is adopted as the major approach along with analytical method to examine 3-D power networks. Combining electromagnetic (EM) tools and circuit simulators, the hybrid approach is able to analyze and model micrometer-scale components as well as centimeter-scale power delivery system with high accuracy and efficiency. The parasitic elements of the components on the power delivery can be precisely modeled by full-wave EM solvers. Stack-up circuit models for the 3-D power delivery networks (PDNs) are constructed through a partition and assembly method. With the efficiency advantage of the SPICE circuit simulation, the overall 3-D system power performance can be analyzed and the 3-D power delivery architectures can be evaluated in a short computing time. The major power delivery issues are the voltage drop (IR drop) and voltage noise. With a baseline of 3-D power delivery architecture, the on-chip PDNs of TSV-based chip stacks are modeled and analyzed for the IR drop and AC noise. The basic design factors are evaluated using the hybrid approach, such as the number of stacked chips, the number of TSVs, and the TSV arrangement. Analytical formulas are also developed to evaluate the IR drop in 3-D chip stack in
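
    As a toy version of the analytical side of such an analysis, consider IR drop accumulation in a TSV stack: the TSVs at level j must carry the supply current of every tier above them, so the drop seen by the top chip grows roughly quadratically with stack height. All values below are assumptions for illustration, not results from the thesis.

    ```python
    # Toy analytical IR-drop estimate for a TSV-based power delivery stack.
    M = 4          # stacked chips
    I_chip = 1.0   # A drawn per chip (assumed)
    R_tsv = 0.05   # ohm per TSV (assumed)
    N_tsv = 100    # power TSVs in parallel per level

    drop = 0.0
    for j in range(1, M + 1):              # level 1 is nearest the package
        current = (M - j + 1) * I_chip     # level j feeds all tiers above it
        drop += current * R_tsv / N_tsv    # parallel TSVs at this level
    # Total scales as M*(M+1)/2, i.e. roughly quadratically in stack height.
    print(f"IR drop at the top chip: {drop * 1000:.1f} mV")
    ```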

  3. Existence and stability of periodic solution of a Lotka-Volterra predator-prey model with state dependent impulsive effects

    NASA Astrophysics Data System (ADS)

    Nie, Linfei; Peng, Jigen; Teng, Zhidong; Hu, Lin

    2009-02-01

    According to biological and chemical pest control strategies, we investigate the dynamic behavior of a Lotka-Volterra predator-prey state-dependent impulsive system in which natural enemies are released and pesticide is sprayed at different thresholds. By using the Poincaré map and the properties of the Lambert W function, we establish sufficient conditions for the existence and stability of the semi-trivial solution and a positive periodic solution. Numerical simulations are carried out to illustrate the feasibility of our main results.
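
    A state-dependent impulsive system of this kind is straightforward to explore numerically: integrate the Lotka-Volterra flow until the pest density reaches a threshold, then apply the impulse (kill a fraction of prey, release predators) and continue. The sketch below uses illustrative parameters, not those of the paper.

    ```python
    # State-dependent impulsive Lotka-Volterra sketch (illustrative parameters).
    from scipy.integrate import solve_ivp

    r, a, c, d = 1.0, 0.5, 0.3, 0.4                 # growth/predation rates
    x_thresh, kill_frac, release = 3.0, 0.6, 0.5    # impulsive control

    def lv(t, y):
        x, z = y                                    # prey (pest), predator
        return [r * x - a * x * z, c * a * x * z - d * z]

    hit = lambda t, y: y[0] - x_thresh              # pest reaches threshold
    hit.terminal, hit.direction = True, 1

    t0, y = 0.0, [1.0, 0.5]
    for cycle in range(10):
        sol = solve_ivp(lv, (t0, t0 + 100.0), y, events=hit, max_step=0.05)
        if sol.status != 1:                         # no threshold crossing: stop
            break
        t0 = sol.t[-1]
        y = [(1 - kill_frac) * sol.y[0][-1],        # spray: kill prey fraction
             sol.y[1][-1] + release]                # release natural enemies
        print(f"impulse {cycle + 1} applied at t = {t0:.2f}")
    ```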

  4. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    SciTech Connect

    Rettmann, Maryam E. Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Packer, Douglas L.; Dalegrave, Charles; Kolasa, Mark W.

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved
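
    The flavor of the Monte Carlo simulations can be conveyed with a small sketch: perturb landmark fiducials with Gaussian noise, recompute the rigid (least-squares) registration, and accumulate target registration error statistics. The geometry and noise level below are assumptions for illustration, not the study's parameters.

    ```python
    # Monte Carlo target registration error for noisy landmark registration.
    import numpy as np

    rng = np.random.default_rng(42)
    landmarks = np.array([[0.0, 0, 0], [60, 0, 0], [0, 60, 0], [0, 0, 60]])  # mm
    target = np.array([20.0, 20.0, 20.0])                                    # mm

    def rigid_fit(A, B):
        """Least-squares rotation R and translation t with B ~ R @ A + t."""
        ca, cb = A.mean(0), B.mean(0)
        U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cb - R @ ca

    tre = []
    for _ in range(5000):
        noisy = landmarks + rng.normal(scale=1.0, size=landmarks.shape)  # 1 mm SD
        R, t = rigid_fit(landmarks, noisy)
        tre.append(np.linalg.norm(R @ target + t - target))
    print(f"mean TRE = {np.mean(tre):.2f} mm, "
          f"95th percentile = {np.percentile(tre, 95):.2f} mm")
    ```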

  5. Quantitative rainfall metrics for comparing volumetric rainfall retrievals to fine scale models

    NASA Astrophysics Data System (ADS)

    Collis, Scott; Tao, Wei-Kuo; Giangrande, Scott; Fridlind, Ann; Theisen, Adam; Jensen, Michael

    2013-04-01

    Precipitation processes play a significant role in the energy balance of convective systems, for example through latent heating and evaporative cooling. Heavy precipitation "cores" can also be a proxy for vigorous convection and vertical motions. However, comparisons between rainfall rate retrievals from volumetric remote sensors and forecast rain fields from high-resolution numerical weather prediction simulations are complicated by differences in the location and timing of storm morphological features. This presentation will outline a series of metrics for diagnosing the spatial variability and statistical properties of precipitation maps produced both from models and from retrievals. We include existing metrics such as Contoured Frequency by Altitude Diagrams (Yuter and Houze 1995) and Statistical Coverage Products (May and Lane 2009) and propose new metrics based on morphology and cell- and feature-based statistics. The work presented focuses on observations from the ARM Southern Great Plains radar network, consisting of three agile X-band radar systems with a very dense coverage pattern and a C-band system providing site-wide coverage. By combining multiple sensors, resolutions of 250 m² can be achieved, allowing improved characterization of fine-scale features. Analyses compare data collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) with simulations of the observed systems using the NASA Unified Weather Research and Forecasting model. May, P. T., and T. P. Lane, 2009: A method for using weather radar data to test cloud resolving models. Meteorological Applications, 16, 425-425, doi:10.1002/met.150. Yuter, S. E., and R. A. Houze, 1995: Three-Dimensional Kinematic and Microphysical Evolution of Florida Cumulonimbus. Part II: Frequency Distributions of Vertical Velocity, Reflectivity, and Differential Reflectivity. Mon. Wea. Rev., 123, 1941-1963, doi:10.1175/1520-0493(1995)123<1941:TDKAME>2.0.CO;2.
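
    Of the cited metrics, the CFAD is simple to compute: at each altitude, histogram the field of interest and normalize, so that models and retrievals can be compared statistically rather than feature by feature. A minimal sketch with toy reflectivity data:

    ```python
    # CFAD: per-altitude normalized histogram of a radar field (toy data).
    import numpy as np

    rng = np.random.default_rng(0)
    n_heights, n_obs = 40, 2000
    heights = np.linspace(0.25, 10.0, n_heights)            # km
    dbz = rng.normal(loc=30 - 2 * heights[:, None], scale=6,
                     size=(n_heights, n_obs))               # toy reflectivity

    bins = np.arange(-10, 61, 2.0)                          # dBZ bins
    cfad = np.array([np.histogram(dbz[k], bins=bins)[0] for k in range(n_heights)])
    cfad = cfad / cfad.sum(axis=1, keepdims=True)           # frequency per level
    print(cfad.shape)  # (altitude, dBZ bin); contour this array to plot the CFAD
    ```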

  6. Quantitation of active pharmaceutical ingredients and excipients in powder blends using designed multivariate calibration models by near-infrared spectroscopy.

    PubMed

    Li, Weiyong; Worosila, Gregory D

    2005-05-13

    This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used in generating a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amounts of powder into separate 20-mL scintillation vials and were mixed manually. Partial least squares (PLS) regression was used in calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate
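
    The calibration step described, PLS regression mapping near-infrared spectra to component concentrations in a designed blend set, can be sketched as follows. The spectra and concentrations are simulated stand-ins, not the study's data.

    ```python
    # PLS calibration on simulated NIR spectra of a 4-component blend.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 125, 400
    C = rng.uniform(0, 1, size=(n_samples, 4))   # component fractions (toy design)
    S = rng.random((4, n_wavelengths))           # toy pure-component spectra
    X = C @ S + rng.normal(scale=0.005, size=(n_samples, n_wavelengths))

    X_tr, X_te, C_tr, C_te = train_test_split(X, C, random_state=1)
    pls = PLSRegression(n_components=6).fit(X_tr, C_tr)
    rmsep = np.sqrt(((pls.predict(X_te) - C_te) ** 2).mean(axis=0))
    print("RMSEP per component:", rmsep.round(4))
    ```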