Science.gov

Sample records for existing models quantitatively

  1. Comment on "Can existing models quantitatively describe the mixing behavior of acetone with water" [J. Chem. Phys. 130, 124516 (2009)].

    PubMed

    Kang, Myungshim; Perera, Aurelien; Smith, Paul E

    2009-10-21

    A recent publication reported that simulations of acetone-water mixtures using the KBFF model for acetone exhibit demixing at acetone mole fractions below 0.28, in disagreement with experiment and with two previously published studies. Here, we point out some inconsistencies in that study which could help to explain these differences. PMID:20568888

  2. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and to perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a Multi-mode Maxwell description of PVA-Borax. We also quantify the merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and for gluten.
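
    As an illustration of the selection criterion summarized above, the sketch below scores two hypothetical fits with a Laplace-style log-evidence: a better misfit is rewarded, while each parameter pays an Occam penalty that grows with the width of its a priori viable range. The chi-squared values, posterior widths, and prior ranges are invented for illustration; the paper's actual computation may differ in detail.

      import numpy as np

      def log_evidence(chi2_min, post_widths, prior_widths):
          # Laplace-style approximation: best-fit likelihood times an
          # Occam factor penalizing each parameter by the ratio of its
          # posterior width to its a priori viable range.
          occam = np.sum(np.log(np.asarray(post_widths) / np.asarray(prior_widths)))
          return -0.5 * chi2_min + occam

      # Hypothetical 2-mode vs 4-mode Maxwell fits to the same data set:
      two_mode = log_evidence(42.0, post_widths=[0.1, 0.2] * 2, prior_widths=[10.0, 100.0] * 2)
      four_mode = log_evidence(35.0, post_widths=[0.1, 0.2] * 4, prior_widths=[10.0, 100.0] * 4)
      print("prefer 4 modes" if four_mode > two_mode else "prefer 2 modes")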

  3. Comparative analysis of existing disinfection models.

    PubMed

    Andrianarison, T; Jupsin, H; Ouali, A; Vasel, J-L

    2010-01-01

    For a long time Marais's model has been the main tool for predicting disinfection in waste stabilization ponds (WSPs), although various authors have developed other disinfection models. Some ten other empirical models have been listed over the past fifteen years. Unfortunately, their predictions of disinfection in a given pond are very different. The existing models are too empirical to give reliable predictions: often their explanatory variables were chosen arbitrarily. In this work, we try to demonstrate that if influent variables have daily variations, the use of their average values in simulations may overestimate the disinfection effect. New methods are thus needed to fit the models better. Better knowledge of the mechanisms involved is needed to improve disinfection models. PMID:20182074
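
    The averaging effect described above can be made concrete. Under first-order die-off, survival decays exponentially in the accumulated dose, so parcels exposed to a fluctuating rate always leave more survivors on average than the daily-mean rate predicts (Jensen's inequality). The sketch below uses an invented diurnal die-off rate and residence time purely for illustration.

      import numpy as np

      t = np.linspace(0.0, 24.0, 241)                  # hours over one day
      k = 0.3 + 0.25 * np.sin(2 * np.pi * t / 24)      # hypothetical diurnal die-off rate (1/h)
      tau = 4.0                                        # hypothetical residence time (h)

      entries = np.linspace(0.0, 24.0, 49)             # parcels entering through the day
      doses = []
      for e in entries:
          s = np.linspace(e, e + tau, 50)              # each parcel's exposure window
          doses.append(np.trapz(np.interp(s % 24, t, k), s))

      survival_true = np.exp(-np.array(doses)).mean()  # average parcels after decay
      survival_mean = np.exp(-k.mean() * tau)          # decay at the daily-mean rate
      print(survival_true > survival_mean)             # True: mean inputs overstate the kill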

  4. Generating Navigation Models from Existing Building Data

    NASA Astrophysics Data System (ADS)

    Liu, L.; Zlatanova, S.

    2013-11-01

    Research on indoor navigation models mainly focuses on geometric and logical models. The models are enriched with specific semantic information which supports localisation, navigation and guidance. Geometric models provide information about the structural (physical) distribution of spaces in a building, while logical models indicate relationships (connectivity and adjacency) between the spaces. In many cases geometric models contain virtual subdivisions to identify smaller spaces which are of interest for navigation (e.g. a reception area) or make use of different semantics. The geometric models are used as a basis to automatically derive logical models. However, there is seldom reported research on how to automatically realize such geometric models from existing building data (such as floor plans) or indoor standards (CityGML LOD4 or IFC). In this paper, we present our experiments on the automatic creation of logical models from floor plans and CityGML LOD4. For the creation we adopt the Indoor Spatial Navigation Model (INSM), which is specifically designed to support indoor navigation. The semantic concepts in INSM differ from everyday notions of indoor spaces such as rooms and corridors, but they facilitate automatic creation of logical models.

  5. Modeling Truth Existence in Truth Discovery

    PubMed Central

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-01-01

    When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery aims to infer the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, named no-truth questions, are difficult to distinguish from the questions that have true answers, named has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate, and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inferring this graphical model requires parameter tuning of the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods. PMID:26705507
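
    The initialization quantity mentioned above can be sketched directly: the participation rate is the fraction of sources that answer a question at all, and the consistency rate measures how much the given answers agree. Combining them as a product is an assumption made here for illustration; the paper's exact synthesis may differ.

      from collections import Counter

      def truth_existence_score(answers, n_sources):
          # answers: the candidate answers actually provided for one question.
          if not answers:
              return 0.0
          participation = len(answers) / n_sources     # how many sources spoke up
          consistency = Counter(answers).most_common(1)[0][1] / len(answers)
          return participation * consistency           # assumed synthesis

      # Hypothetical question answered by 3 of 5 sources, two agreeing:
      print(truth_existence_score(["Paris", "Paris", "Lyon"], n_sources=5))  # 0.4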

  6. LDEF data: Comparisons with existing models

    NASA Technical Reports Server (NTRS)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-01-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) and the existing models for both the natural micrometeoroid environment and man-made debris was investigated. Experimental data were provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A personal computer (PC) program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as a function of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. In addition, many hydrodynamic impact simulations of various impact events were conducted using CTH; these identified certain modes of response, including simple metallic target cratering, perforations, and delamination effects of coatings.
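
    The flux-to-crater conversion described above can be sketched as follows; the power-law flux, the crater-to-particle scaling factor, and the surface area are stand-in values, not the SPENV model's actual coefficients.

      import numpy as np

      def particle_flux(d_um, A=1.0e2, b=2.5):
          # Hypothetical cumulative flux of particles larger than d_um
          # (impacts per m^2 per year), standing in for the LEO
          # micrometeoroid/debris environment models.
          return A * d_um ** -b

      def crater_rate(D_um, C=5.0):
          # Craters larger than D_um form from particles larger than D_um / C,
          # where C is an assumed crater-to-particle scaling factor for a given
          # impact velocity and aluminum target.
          return particle_flux(D_um / C)

      area_m2, years = 1.0, 5.75     # one square meter over the ~5.75-year mission
      print(crater_rate(100.0) * area_m2 * years, "expected craters > 100 um")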

  7. LDEF data: Comparisons with existing models

    NASA Astrophysics Data System (ADS)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-04-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) and the existing models for both the natural micrometeoroid environment and man-made debris was investigated. Experimental data were provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A personal computer (PC) program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as a function of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, and much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. In addition, many hydrodynamic impact simulations of various impact events were conducted using CTH; these identified certain modes of response, including simple metallic target cratering, perforations, and delamination effects of coatings.

  8. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  9. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, Do-Hyuk; Tang, Shurun; Kim, Edward J.

    2015-10-01

    A radiative transfer model (RTM) to calculate snow brightness temperatures (Tb) is a critical element in terrestrial snow parameter retrieval from microwave remote sensing observations. The RTM simulates the Tb for a layered snowpack by solving a set of microwave radiative transfer equations. Even with the same snow physical inputs driving the RTMs, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-QMS), and Helsinki University of Technology (HUT) models produce different Tb responses. Before snow physical properties can be inverted from the Tb, the differences among the RTMs must first be quantitatively explained. To this end, this initial investigation evaluates the sources of perturbations in these RTMs and identifies the equations in which the three models differ. Modelling experiments are conducted by providing the same, gradually varied snow physical inputs, such as snow grain size and snow density, to the three RTMs. Simulations are conducted at frequencies consistent with the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the three RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against in situ snow samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010 campaigns.

  10. Interpreting snowpack radiometry using currently existing microwave radiative transfer models

    NASA Astrophysics Data System (ADS)

    Kang, D. H.; Tan, S.; Kim, E. J.

    2015-12-01

    A radiative transfer model (RTM) to calculate a snow brightness temperature (Tb) is a critical element in retrieving terrestrial snow properties from microwave remote sensing observations. The RTM simulates the Tb for a layered snowpack by solving a set of microwave radiative transfer formulas. Even with the same snow physical inputs used for the RTM, currently existing models such as the Microwave Emission Model of Layered Snowpacks (MEMLS), Dense Media Radiative Transfer (DMRT-Tsang), and Helsinki University of Technology (HUT) models produce different Tb responses. Before snow physical properties can be inverted from the Tb, the differences among the RTMs must be quantitatively explained. To this end, the paper evaluates the sources of perturbations in the RTMs and identifies the equations in which the three models differ. Investigations are conducted by providing the same, gradually varied snow physical inputs, such as snow grain size and snow density, to the three RTMs. Simulations are done at frequencies consistent with the Advanced Microwave Scanning Radiometer-E (AMSR-E): 6.9, 10.7, 18.7, 23.8, 36.5, and 89.0 GHz. For realistic simulations, the three RTMs are simultaneously driven by the same snow physics model with meteorological forcing datasets and are validated against snow core samplings from the CLPX (Cold Land Processes Field Experiment) 2002-2003 and NoSREx (Nordic Snow Radar Experiment) 2009-2010 campaigns.

  11. The mathematics of cancer: integrating quantitative models.

    PubMed

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology. PMID:26597528

  12. Quantitative modeling of planetary magnetospheric magnetic fields

    NASA Technical Reports Server (NTRS)

    Walker, R. J.

    1979-01-01

    Three new quantitative models of the earth's magnetospheric magnetic field have recently been presented: the Olson-Pfitzer model, the Tsyganenko model, and the Voigt model. The paper reviews these models in some detail with emphasis on the extent to which they have succeeded in improving on earlier models. The models are compared with the observed field in both magnitude and direction. Finally, the application to other planetary magnetospheres of the techniques used to model the earth's magnetospheric magnetic field is briefly discussed.

  13. Performance evaluation of ExiStation HBV diagnostic system for hepatitis B virus DNA quantitation.

    PubMed

    Cha, Young Joo; Yoo, Soo Jin; Sohn, Yong-Hak; Kim, Hyun Soo

    2013-11-01

    The performance of a recently developed real-time PCR system, the ExiStation HBV diagnostic system, for quantitation of hepatitis B virus (HBV) in human blood was evaluated. The detection limit, reproducibility, cross-reactivity, and interference were evaluated as measures of analytical performance. For the comparison study, 100 HBV-positive blood samples and 100 HBV-negative samples from Korean Blood Bank Serum were used, and the results of the ExiStation HBV system showed good correlation with those obtained using the Cobas TaqMan (r²=0.9931) and Abbott real-time PCR systems (r²=0.9894). The lower limit of detection was measured as 9.55 IU/mL using WHO standards and the dynamic range was linear from 6.68 to 6.68×10⁹ IU/mL using cloned plasmids. The within-run coefficient of variation (CV) was 9.4%, 2.1%, and 1.1%, and the total CV was 11.8%, 3.6%, and 1.7% at concentrations of 1.92 log₁₀ IU/mL, 3.88 log₁₀ IU/mL, and 6.84 log₁₀ IU/mL, respectively. No cross-reactivity or interference was detected. The ExiStation HBV diagnostic system showed satisfactory analytical sensitivity, excellent reproducibility, no cross-reactivity, no interference, and high agreement with the Cobas TaqMan and Abbott real-time PCR systems, and is therefore a useful tool for the detection and monitoring of HBV infection. PMID:23892129

  14. Integrated Environmental Modeling: Quantitative Microbial Risk Assessment

    EPA Science Inventory

    The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...

  15. 6 Principles for Quantitative Reasoning and Modeling

    ERIC Educational Resources Information Center

    Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye

    2014-01-01

    Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…

  16. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  17. Quantitative risk modeling in aseptic manufacture.

    PubMed

    Tidswell, Edward C; McGarvey, Bernard

    2006-01-01

    Expedient risk assessment of aseptic manufacturing processes offers unique opportunities for improved and sustained assurance of product quality. Contemporary risk assessments applied to aseptic manufacturing processes, however, are commonly handicapped by assumptions and subjectivity, leading to inexactitude. Quantitative risk modeling augmented with Monte Carlo simulations represents a novel, innovative, and more efficient means of risk assessment. This technique relies upon fewer assumptions and removes subjectivity to more swiftly generate an improved, more realistic, quantitative estimate of risk. The fundamental steps and requirements for an assessment of the risk of bioburden ingress into aseptically manufactured products are described. A case study exemplifies how quantitative risk modeling and Monte Carlo simulations achieve a more rapid and improved determination of the risk of bioburden ingress during the aseptic filling of a parenteral product. Although application of quantitative risk modeling is described here purely for the purpose of process improvement, the technique has far wider relevance in the assisted disposition of batches, cleanroom management, and the utilization of real-time data from rapid microbial monitoring technologies. PMID:17089696
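
    A toy version of the Monte Carlo approach described above is sketched below: instead of single point estimates, each input to the ingress calculation is drawn from a distribution, and the risk is read off the simulated ensemble. All distributions and parameter values are invented for illustration, not taken from the paper's case study.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000                                                  # simulated filling runs

      airborne_cfu = rng.lognormal(mean=-2.0, sigma=1.0, size=N)   # CFU near the open container
      deposition_p = rng.beta(2, 200, size=N)                      # chance a given CFU deposits
      exposure_min = rng.triangular(0.5, 1.0, 3.0, size=N)         # open-container time (min)

      ingress = rng.poisson(airborne_cfu * deposition_p * exposure_min)
      print("P(at least one CFU ingress) ~", (ingress > 0).mean())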

  18. Quantitative modeling of soil genesis processes

    NASA Technical Reports Server (NTRS)

    Levine, E. R.; Knox, R. G.; Kerber, A. G.

    1992-01-01

    For fine spatial scale simulation, a model is being developed to predict changes in properties over short-, meso-, and long-term time scales within horizons of a given soil profile. Processes that control these changes can be grouped into five major process clusters: (1) abiotic chemical reactions; (2) activities of organisms; (3) energy balance and water phase transitions; (4) hydrologic flows; and (5) particle redistribution. Landscape modeling of soil development is possible using digitized soil maps associated with quantitative soil attribute data in a geographic information system (GIS) framework to which simulation models are applied.

  19. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
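
    The metadata-field link described above amounts to a simple keyed join, sketched below with hypothetical record names; in the spreadsheet, the same key ties a Basic Event row to its data-source row and to the manipulations applied along the way.

      # Hypothetical rows mirroring the spreadsheet layout: each Basic Event
      # carries a metadata key pointing at exactly one data-source record.
      data_sources = {
          "DS-017": {"failure_rate": 2.0e-6, "reference": "handbook entry (assumed)"},
      }
      basic_events = [
          {"id": "VALVE-FTO", "source_key": "DS-017", "duty_factor": 0.3},
      ]

      for be in basic_events:
          ds = data_sources[be["source_key"]]                  # traceable link, no retyping
          be["rate"] = ds["failure_rate"] * be["duty_factor"]  # a stressing manipulation
          print(be["id"], "->", ds["reference"], be["rate"])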

  20. Quantitative indices of autophagy activity from minimal models

    PubMed Central

    2014-01-01

    Background A number of cellular- and molecular-level studies of autophagy assessment have been carried out with the help of various biochemical and morphological indices. Still there exists ambiguity for the assessment of the autophagy status and of the causal relationship between autophagy and related cellular changes. To circumvent such difficulties, we probe new quantitative indices of autophagy which are important for defining autophagy activation and further assessing its roles associated with different physiopathological states. Methods Our approach is based on the minimal autophagy model that allows us to understand underlying dynamics of autophagy from biological experiments. Specifically, based on the model, we reconstruct the experimental context-specific autophagy profiles from the target autophagy system, and two quantitative indices are defined from the model-driven profiles. The indices are then applied to the simulation-based analysis, for the specific and quantitative interpretation of the system. Results Two quantitative indices measuring autophagy activities in the induction of sequestration fluxes and in the selective degradation are proposed, based on the model-driven autophagy profiles such as the time evolution of autophagy fluxes, levels of autophagosomes/autolysosomes, and corresponding cellular changes. Further, with the help of the indices, those biological experiments of the target autophagy system have been successfully analyzed, implying that the indices are useful not only for defining autophagy activation but also for assessing its role in a specific and quantitative manner. Conclusions Such quantitative autophagy indices in conjunction with the computer-aided analysis should provide new opportunities to characterize the causal relationship between autophagy activity and the corresponding cellular change, based on the system-level understanding of the autophagic process at good time resolution, complementing the current in vivo and in…

  1. Competitive speciation in quantitative genetic models.

    PubMed

    Drossel, B; McKane, A

    2000-06-01

    We study sympatric speciation due to competition in an environment with a broad distribution of resources. We assume that the trait under selection is a quantitative trait, and that mating is assortative with respect to this trait. Our model alternates selection according to Lotka-Volterra-type competition equations, with reproduction using the ideas of quantitative genetics. The recurrence relations defined by these equations are studied numerically and analytically. We find that when a population enters a new environment, with a broad distribution of unexploited food sources, the population distribution broadens under a variety of conditions, with peaks at the edge of the distribution indicating the formation of subpopulations. After a long enough time period, the population can split into several subpopulations with little gene flow between them. PMID:10816369
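
    The selection half of the alternation described above can be sketched on a discretized trait axis: a Lotka-Volterra step in which each phenotype's growth depends on the local resource supply and on competition with similar phenotypes. The kernel widths and growth rate below are illustrative, and the reproduction step with assortative mating is omitted.

      import numpy as np

      x = np.linspace(-3.0, 3.0, 301)                  # trait axis
      dx = x[1] - x[0]
      K = np.exp(-x**2 / (2 * 1.0**2))                 # broad resource distribution
      n = 0.1 * np.exp(-x**2 / (2 * 0.3**2))           # narrow founding population

      comp = np.exp(-x**2 / (2 * 0.4**2))              # competition kernel, narrower than K
      comp /= comp.sum() * dx

      for _ in range(2000):
          # Lotka-Volterra selection step; the model's reproduction step
          # (quantitative genetics with assortative mating) is not shown.
          pressure = np.convolve(n, comp, mode="same") * dx
          n *= np.clip(1 + 0.1 * (1 - pressure / K), 0.0, None)

      peaks = (n[1:-1] > n[:-2]) & (n[1:-1] > n[2:])
      print("local maxima at x =", x[1:-1][peaks])     # peaks signal emerging subpopulations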

  2. The existence of amorphous phase in Portland cements: Physical factors affecting Rietveld quantitative phase analysis

    SciTech Connect

    Snellings, Ruben; Bazzoni, Amélie; Scrivener, Karen

    2014-05-01

    Rietveld quantitative phase analysis has become a widespread tool for the characterization of Portland cement, both for research and production control purposes. One of the major remaining points of debate is whether Portland cements contain amorphous content or not. This paper presents detailed analyses of the amorphous phase contents in a set of commercial Portland cements, clinker, synthetic alite and limestone by Rietveld refinement of X-ray powder diffraction measurements using both external and internal standard methods. A systematic study showed that the sample preparation and comminution procedure is closely linked to the calculated amorphous contents. Particle size reduction by wet-grinding lowered the calculated amorphous contents to insignificant quantities for all materials studied. No amorphous content was identified in the final analysis of the Portland cements under investigation.
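
    For reference, the internal-standard relation commonly used in such Rietveld studies is sketched below (this is the standard textbook expression, not necessarily the exact form used in the paper): an amorphous fraction appears whenever the standard's refined content exceeds its weighed-in content.

      def amorphous_wt_pct(W_s, R_s):
          # W_s: weighed-in internal standard content (wt%)
          # R_s: content Rietveld reports for the standard (wt%)
          # Returns the apparent amorphous content of the original sample (wt%).
          return 1e4 * (1 - W_s / R_s) / (100.0 - W_s)

      # A standard spiked at 20 wt% but refined at 22 wt% implies roughly
      # 11.4 wt% apparent amorphous content, which grinding artifacts can inflate:
      print(amorphous_wt_pct(W_s=20.0, R_s=22.0))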

  3. Facilities Management of Existing School Buildings: Two Models.

    ERIC Educational Resources Information Center

    Building Technology, Inc., Silver Spring, MD.

    While all school districts are responsible for the management of their existing buildings, they often approach the task in different ways. This document presents two models that offer ways a school district administration, regardless of size, may introduce activities into its ongoing management process that will lead to improvements in earthquake…

  4. Training of Existing Workers: Issues, Incentives and Models. Support Document

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This document was produced by the authors based on their research for the report, "Training of Existing Workers: Issues, Incentives and Models," (ED495138) and is an added resource for further information. This support document is divided into the following sections: (1) The Retail Industry--A Snapshot; (2) Case Studies--Hardware, Retail Industry…

  5. On the existence of monodromies for the Rabi model

    NASA Astrophysics Data System (ADS)

    Carneiro da Cunha, Bruno; Carvalho de Almeida, Manuela; Rabelo de Queiroz, Amílcar

    2016-05-01

    We discuss the existence of monodromies associated with the singular points of the eigenvalue problem for the Rabi model. The complete control of the full monodromy data requires the taming of the Stokes phenomenon associated with the unique irregular singular point. The monodromy data, in particular, the composite monodromy, are written in terms of the parameters of the model via the isomonodromy method and the τ function of the Painlevé V. These data provide a systematic way to obtain the quantized spectrum of the Rabi model.

  6. Quantitative risk modelling for new pharmaceutical compounds.

    PubMed

    Tang, Zhengru; Taylor, Mark J; Lisboa, Paulo; Dyas, Mark

    2005-11-15

    The process of discovering and developing new drugs is long, costly and risk-laden. Faced with a wealth of newly discovered compounds, industrial scientists need to target resources carefully to discern the key attributes of a drug candidate and to make informed decisions. Here, we describe a quantitative approach to modelling the risk associated with drug development as a tool for scenario analysis concerning the probability of success of a compound as a potential pharmaceutical agent. We bring together the three strands of manufacture, clinical effectiveness and financial returns. This approach involves the application of a Bayesian Network. A simulation model is demonstrated with an implementation in MS Excel using the modelling engine Crystal Ball. PMID:16257374

  7. Quantitative Models of CAI Rim Layer Growth

    NASA Astrophysics Data System (ADS)

    Ruzicka, A.; Boynton, W. V.

    1995-09-01

    Many hypotheses have been proposed to account for the ~50 micrometer-thick layer sequences (Wark-Lovering rims) that typically surround coarse-grained Ca,Al-rich inclusions (CAIs), but to date no consensus has emerged on how these rims formed. A two-step process-- flash heating of CAIs to produce a refractory residue on the margins of CAIs [1,2,3], followed by reaction and diffusion between CAIs or the refractory residue and an external medium rich in Mg, Si and other ferromagnesian and volatile elements to form the layers [3,4,5]-- may have formed the rims. We have tested the second step of this process quantitatively, and show that many, but not all, of the layering characteristics of CAI rims in the Vigarano, Leoville, and Efremovka CV3 chondrites can be explained by steady-state reaction and diffusion between CAIs and an external medium rich in Mg and Si. Moreover, observed variations in the details of the layering from one CAI to another can be explained primarily by differences in the identity and composition of the external medium, which appears to have included vapor alone, vapor + olivine, and olivine +/- clinopyroxene +/- vapor. An idealized layer sequence for CAI rims in Vigarano, Leoville, and Efremovka can be represented as MSF|S|AM|D|O, where MSF = melilite (M) + spinel (S) + fassaite (F) in the interior of CAIs; S = spinel-rich layer; AM = a layer consisting either of anorthite (A) alone, or M alone, or both A and M; D = a clinopyroxene layer consisting mainly of aluminous diopside (D) that is zoned to fassaite towards the CAI; and O = olivine-rich layer, composed mainly of individually zoned olivine grains that apparently pre-existed layer formation [3]. A or M are absent between the S and D layers in roughly half of the rims. The O layer varies considerably in thickness (0-60 micrometers thick) and in porosity from rim to rim, with olivine grains either tightly intergrown to form a compact layer or arranged loosely on the outer surfaces of the CAIs

  8. Global quantitative modeling of chromatin factor interactions.

    PubMed

    Zhou, Jian; Troyanskaya, Olga G

    2014-03-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the "chromatin codes") remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of pairwise chromatin factor interactions, validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles; we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896

  9. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions. PMID:20810949

  10. Physiologically based quantitative modeling of unihemispheric sleep.

    PubMed

    Kedziora, D J; Abeysuriya, R G; Phillips, A J K; Robinson, P A

    2012-12-01

    Unihemispheric sleep has been observed in numerous species, including birds and aquatic mammals. While knowledge of its functional role has been improved in recent years, the physiological mechanisms that generate this behavior remain poorly understood. Here, unihemispheric sleep is simulated using a physiologically based quantitative model of the mammalian ascending arousal system. The model includes mutual inhibition between wake-promoting monoaminergic nuclei (MA) and sleep-promoting ventrolateral preoptic nuclei (VLPO), driven by circadian and homeostatic drives as well as cholinergic and orexinergic input to MA. The model is extended here to incorporate two distinct hemispheres and their interconnections. It is postulated that inhibitory connections between VLPO nuclei in opposite hemispheres are responsible for unihemispheric sleep, and it is shown that contralateral inhibitory connections promote unihemispheric sleep while ipsilateral inhibitory connections promote bihemispheric sleep. The frequency of alternating unihemispheric sleep bouts is chiefly determined by sleep homeostasis and its corresponding time constant. It is shown that the model reproduces dolphin sleep, and that the sleep regimes of humans, cetaceans, and fur seals, the latter both terrestrially and in a marine environment, require only modest changes in contralateral connection strength and homeostatic time constant. It is further demonstrated that fur seals can potentially switch between their terrestrial bihemispheric and aquatic unihemispheric sleep patterns by varying just the contralateral connection strength. These results provide experimentally testable predictions regarding the differences between species that sleep bihemispherically and unihemispherically. PMID:22960411

  11. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The model comprises three aspects: visual, kinesthetic, and social logic. The visual aspect assesses how interior spaces appear to the inhabitants; it concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  12. First Principles Quantitative Modeling of Molecular Devices

    NASA Astrophysics Data System (ADS)

    Ning, Zhanyu

    In this thesis, we report theoretical investigations of nonlinear and nonequilibrium quantum electronic transport properties of molecular transport junctions from atomistic first principles. The aim is to seek not only qualitative but also quantitative understanding of the corresponding experimental data. At present, the challenges to quantitative theoretical work in molecular electronics include two most important questions: (i) what is the proper atomic model for the experimental devices? (ii) how to accurately determine quantum transport properties without any phenomenological parameters? Our research is centered on these questions. We have systematically calculated atomic structures of the molecular transport junctions by performing total energy structural relaxation using density functional theory (DFT). Our quantum transport calculations were carried out by implementing DFT within the framework of Keldysh non-equilibrium Green's functions (NEGF). The calculated data are directly compared with the corresponding experimental measurements. Our general conclusion is that quantitative comparison with experimental data can be made if the device contacts are correctly determined. We calculated properties of nonequilibrium spin injection from Ni contacts to octane-thiolate films which form a molecular spintronic system. The first principles results allow us to establish a clear physical picture of how spins are injected from the Ni contacts through the Ni-molecule linkage to the molecule, why tunnel magnetoresistance is rapidly reduced by the applied bias in an asymmetric manner, and to what extent ab initio transport theory can make quantitative comparisons to the corresponding experimental data. We found that extremely careful sampling of the two-dimensional Brillouin zone of the Ni surface is crucial for accurate results in such a spintronic system. We investigated the role of contact formation and its resulting structures to quantum transport in several molecular

  13. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  14. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid

  15. Quantitative modeling of multiscale neural activity

    NASA Astrophysics Data System (ADS)

    Robinson, Peter A.; Rennie, Christopher J.

    2007-01-01

    The electrical activity of the brain has been observed for over a century and is widely used to probe brain function and disorders, chiefly through the electroencephalogram (EEG) recorded by electrodes on the scalp. However, the connections between physiology and EEGs have been chiefly qualitative until recently, and most uses of the EEG have been based on phenomenological correlations. A quantitative mean-field model of brain electrical activity is described that spans the range of physiological and anatomical scales from microscopic synapses to the whole brain. Its parameters measure quantities such as synaptic strengths, signal delays, cellular time constants, and neural ranges, and are all constrained by independent physiological measurements. Application of standard techniques from wave physics allows successful predictions to be made of a wide range of EEG phenomena, including time series and spectra, evoked responses to stimuli, dependence on arousal state, seizure dynamics, and relationships to functional magnetic resonance imaging (fMRI). Fitting to experimental data also enables physiological parameters to be inferred, giving a new noninvasive window into brain function, especially when referenced to a standardized database of subjects. Modifications of the core model to treat mm-scale patchy interconnections in the visual cortex are also described, and it is shown that resulting waves obey the Schroedinger equation. This opens the possibility of classical cortical analogs of quantum phenomena.

  16. Toward quantitative modeling of silicon phononic thermocrystals

    SciTech Connect

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Dubois, E.; Monfray, S.; Skotnicki, T.

    2015-03-16

    The wealth of patterning technologies with deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals' that introduce regular nano-scale inclusions with a pitch scale in between the thermal phonon mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced by up to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the individual contributions to the shape effect: bulk silicon, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After discussing the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
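
    The Green-Kubo step mentioned above integrates the equilibrium heat-flux autocorrelation. A minimal estimator is sketched below in one common convention (a single Cartesian flux component, flux per unit volume); the synthetic noise trace only exercises the code and stands in for a real MD flux output.

      import numpy as np

      kB = 1.380649e-23  # J/K

      def green_kubo_kappa(J, dt, V, T, n_lags):
          # kappa = V / (kB T^2) * integral_0^tcut <J(0) J(t)> dt
          J = J - J.mean()
          n = len(J)
          f = np.fft.rfft(J, 2 * n)                    # FFT-based autocorrelation
          acf = np.fft.irfft(f * np.conj(f))[:n_lags]
          acf /= np.arange(n, n - n_lags, -1)          # unbiased normalization
          return V / (kB * T**2) * np.trapz(acf, dx=dt)

      # Synthetic stand-in for an MD heat-flux trace (Ornstein-Uhlenbeck noise):
      rng = np.random.default_rng(1)
      dt, tau, J = 1e-15, 5e-13, np.zeros(50_000)
      for i in range(1, len(J)):
          J[i] = J[i - 1] * (1 - dt / tau) + rng.normal(0.0, 1e9) * np.sqrt(dt)
      print(green_kubo_kappa(J, dt, V=1e-24, T=300.0, n_lags=5_000))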

  17. Toward quantitative modeling of silicon phononic thermocrystals

    NASA Astrophysics Data System (ADS)

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Monfray, S.; Skotnicki, T.; Dubois, E.

    2015-03-01

    The wealth of patterning technologies with deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of "thermocrystals" or "nanophononic crystals" that introduce regular nano-scale inclusions with a pitch scale in between the thermal phonon mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced by up to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known "electron crystal-phonon glass" dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the individual contributions to the shape effect: bulk silicon, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After discussing the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act towards a reduction of the thermal conductivity. The further decrease in the phononic engineered membrane clearly demonstrates that both phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.

  18. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  19. Existence of Periodic Solutions for a Modified Growth Solow Model

    NASA Astrophysics Data System (ADS)

    Fabião, Fátima; Borges, Maria João

    2010-10-01

    In this paper we analyze the dynamics of the Solow growth model with a Cobb-Douglas production function. For this purpose, we consider that the labour growth rate, L'(t)/L(t), is a T-periodic function, for a fixed positive real number T. We obtain closed-form solutions for the fundamental Solow equation with the new description of L(t). Using notions from the qualitative theory of ordinary differential equations and nonlinear functional analysis, we prove that there exists a T-periodic solution of the Solow equation. From the economic point of view this is a new result which allows a more realistic interpretation of the stylized facts.
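
    The existence claim can be probed numerically: with Cobb-Douglas technology, capital per worker obeys k'(t) = s k^a - (d + n(t)) k, with n(t) = L'(t)/L(t) the T-periodic labour growth rate, and an attracting T-periodic orbit shows up as k returning to itself after one period once transients decay. All parameter values below are illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp

      s, a, d, T = 0.3, 0.4, 0.05, 1.0
      n = lambda t: 0.02 + 0.01 * np.sin(2 * np.pi * t / T)   # T-periodic L'/L

      def solow(t, k):
          return s * k ** a - (d + n(t)) * k

      # Integrate long enough for transients to die out, then compare k
      # at the start and end of one period:
      sol = solve_ivp(solow, (0.0, 200 * T), [1.0], dense_output=True, rtol=1e-10)
      k_start, k_end = sol.sol(199 * T)[0], sol.sol(200 * T)[0]
      print(abs(k_end - k_start))   # ~0: the trajectory settles on a T-periodic solution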

  20. Comparative Application of Capacity Models for Seismic Vulnerability Evaluation of Existing RC Structures

    SciTech Connect

    Faella, C.; Lima, C.; Martinelli, E.; Nigro, E.

    2008-07-08

    Seismic vulnerability assessment of existing buildings is one of the most common tasks in which Structural Engineers are currently engaged. Since it is often a preliminary step in approaching the issue of how to retrofit structures not designed and detailed for seismic loads, it plays a key role in the successful choice of the most suitable strengthening technique. In this framework, the basic information for both seismic assessment and retrofitting is related to the formulation of capacity models for structural members. Plenty of proposals, often contradictory from a quantitative standpoint, are currently available in the technical and scientific literature for defining the structural capacity in terms of forces and displacements, possibly with reference to different parameters representing the seismic response. The present paper briefly reviews some of the capacity models for RC members and compares them with reference to two case studies assumed to be representative of a wide class of existing buildings.

  1. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
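
    At its crudest, the field-and-force computation described above reduces to a point-dipole field and the saturated-bead force law F = m dB/dz. The sketch below uses that stand-in geometry; the magnet and bead moments are assumed values, whereas the paper computes realistic magnet pairs (and yokes) semianalytically or by finite elements.

      import numpy as np

      mu0 = 4e-7 * np.pi            # T m / A
      m_magnet = 0.125              # A m^2, assumed moment of a small NdFeB magnet
      m_bead = 1.5e-14              # A m^2, assumed saturated bead moment

      z = np.linspace(3e-3, 10e-3, 500)           # bead-magnet distances (m)
      Bz = mu0 * m_magnet / (2 * np.pi * z**3)    # on-axis point-dipole field
      Fz = m_bead * np.gradient(Bz, z)            # F = m dB/dz once the bead saturates

      print(abs(Fz[0]) * 1e12, "pN at 3 mm (toy geometry)")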

  2. Quantitative assessment of computational models for retinotopic map formation

    PubMed Central

    Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2014-01-01

    Molecular and activity‐based cues acting together are thought to guide retinal axons to their terminal sites in the vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of the mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity‐based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2‐EphA3 ki/ki, Isl2‐EphA3 ki/+, ephrin‐A2,A3,A5 triple knock‐out (TKO), and Math5 −/− (Atoh7). Two models successfully reproduced the extent of the Math5 −/− anteromedial projection, but only one of those could account for the collapse point in Isl2‐EphA3 ki/+. The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin‐A2,A3,A5 TKO phenotype, suggesting either an incomplete knock‐out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. PMID:25367067

  3. Quantitative assessment of computational models for retinotopic map formation.

    PubMed

    Hjorth, J J Johannes; Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2015-06-01

    Molecular and activity-based cues acting together are thought to guide retinal axons to their terminal sites in the vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of the mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity-based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2-EphA3(ki/ki), Isl2-EphA3(ki/+), ephrin-A2,A3,A5 triple knock-out (TKO), and Math5(-/-) (Atoh7). Two models successfully reproduced the extent of the Math5(-/-) anteromedial projection, but only one of those could account for the collapse point in Isl2-EphA3(ki/+). The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin-A2,A3,A5 TKO phenotype, suggesting either an incomplete knock-out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. PMID:25367067

  4. Modeling conflict: research methods, quantitative modeling, and lessons learned.

    SciTech Connect

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both historically and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  5. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

    PubMed Central

    Quo, Chang F; Wang, May D

    2008-01-01

    Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges, from 10⁻¹⁵ to 10¹⁰, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluation, partial derivative, LU decomposition, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with quantitative performance assessment metric, we show that it is possible
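
    The record's stiff-versus-nonstiff comparison can be reproduced in outline with SciPy, where RK45 plays roughly the role of MATLAB's ode45 and BDF that of ode15s. The sketch below uses the classic Field-Noyes (OREGO) parameterization of the Oregonator, a standard stiff test problem; the tolerances are illustrative choices, not the study's settings.

    ```python
    # Sketch: comparing a non-stiff and a stiff solver on the Oregonator model
    # (classic stiff formulation of the Belousov-Zhabotinskii reaction).
    import numpy as np
    from scipy.integrate import solve_ivp

    def oregonator(t, y):
        y1, y2, y3 = y
        return [77.27 * (y2 + y1 * (1.0 - 8.375e-6 * y1 - y2)),
                (y3 - (1.0 + y1) * y2) / 77.27,
                0.161 * (y1 - y3)]

    y0 = [1.0, 2.0, 3.0]          # standard test initial condition
    t_span = (0.0, 360.0)         # roughly one oscillation period

    for method in ("RK45", "BDF"):
        sol = solve_ivp(oregonator, t_span, y0, method=method,
                        rtol=1e-6, atol=1e-9)
        # nfev: right-hand-side evaluations, a proxy for computational cost
        print(f"{method}: {sol.nfev} function evaluations, success={sol.success}")
    ```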

  6. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals

    EPA Science Inventory

    Protocols for terrestrial bioaccumulation assessments are far less developed than those for aquatic systems. This manuscript reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, inver...

  7. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    PubMed

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

    Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel and the analysis of existing biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with the conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in only certain parts of the system, i.e., where kinetic data are missing. The case study of the proposed approach is performed on a model of the three-gene repressilator. PMID:26451831

  8. Existing Soil Carbon Models Do Not Apply to Forested Wetlands.

    SciTech Connect

    Trettin, C C; Song, B; Jurgensen, M F; Li, C

    2001-09-14

    This study evaluates 12 widely used soil carbon models to determine their applicability to wetland ecosystems. For any land area that includes wetlands, none of the individual models would produce reasonable simulations based on soil processes. The study presents a wetland soil carbon model framework based on desired attributes, the DNDC model, and components of the CENTURY and WMEM models. The proposed synthesis would be appropriate when considering soil carbon dynamics at multiple spatial scales and where the land area considered includes both wetland and upland ecosystems.

  9. Training of Existing Workers: Issues, Incentives and Models

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This report presents issues associated with incentives for training existing workers in small to medium-sized firms, identified through a small sample of case studies from the retail, manufacturing, and building and construction industries. While the majority of employers recognise workforce skill levels are fundamental to the success of the…

  10. A Quantitative Software Risk Assessment Model

    NASA Technical Reports Server (NTRS)

    Lee, Alice

    2002-01-01

    This slide presentation reviews a risk assessment model as applied to software development. The presentation uses graphs to demonstrate basic concepts of software reliability. It also discusses the application of the risk model to the software development life cycle.

  11. What Are We Doing When We Translate from Quantitative Models?

    PubMed Central

    Critchfield, Thomas S; Reed, Derek D

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively. PMID:22478533
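
    The record's point, that equations convey what narrative translations lose, can be illustrated with one widely cited quantitative principle from this literature, the generalized matching law. The sketch below is a generic example; the exponent and bias values are invented for illustration and are not taken from the article.

    ```python
    # Sketch: the generalized matching law, log(B1/B2) = a*log(R1/R2) + log(b),
    # a standard quantitative principle in behavior analysis. Parameter values
    # and reinforcement rates below are invented for illustration.
    def response_ratio(r1, r2, a=0.8, bias=1.0):
        """Predicted behavior ratio B1/B2 given reinforcement rates R1, R2."""
        return bias * (r1 / r2) ** a   # power-law form of the log-linear equation

    # Undermatching (a < 1): doubling relative reinforcement less than doubles
    # relative responding -- a nuance easily lost in a narrative translation.
    print(round(response_ratio(60, 30), 2))   # ~1.74, not 2.0
    ```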

  12. A review: Quantitative models for lava flows on Mars

    NASA Technical Reports Server (NTRS)

    Baloga, S. M.

    1987-01-01

    The purpose of this abstract is to review and assess the application of quantitative models (the Gratz numerical correlation model, radiative loss model, yield stress model, surface structure model, and kinematic wave model) to lava flows on Mars. These theoretical models were applied to Martian flow data to aid in establishing the composition of the lava or to determine other eruption conditions, such as eruption rate or duration.
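
    Models of this kind typically invert flow geometry for rheological or eruption parameters. As a hedged illustration of one classical ingredient of such inferences (not necessarily Baloga's formulation), the sketch below applies the Jeffreys equation for a broad laminar sheet flow; all input values are assumptions, not Martian flow data from the review.

    ```python
    # Sketch: estimating apparent lava viscosity from flow geometry with the
    # Jeffreys equation for a broad laminar sheet: u = rho*g*h^2*sin(alpha)/(3*eta).
    # Input values are illustrative assumptions, not data from the review.
    import math

    rho = 2500.0                # kg/m^3, lava density (assumed)
    g = 3.71                    # m/s^2, Martian surface gravity
    h = 10.0                    # m, flow thickness (assumed)
    alpha = math.radians(0.5)   # ground slope (assumed)
    u = 1.0                     # m/s, mean flow velocity (assumed)

    eta = rho * g * h**2 * math.sin(alpha) / (3.0 * u)
    print(f"apparent viscosity ~ {eta:.0f} Pa s")   # ~2700 Pa s for these inputs
    ```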

  13. Comparison of Existing Response Criteria in Patients with Hepatocellular Carcinoma Treated with Transarterial Chemoembolization Using a 3D Quantitative Approach

    PubMed Central

    Tacher, Vania; Lin, MingDe; Duran, Rafael; Yarmohammadi, Hooman; Lee, Howard; Chapiro, Julius; Chao, Michael; Wang, Zhijun; Frangakis, Constantine; Sohn, Jae Ho; Maltenfort, Mitchell Gil; Pawlik, Timothy; Geschwind, Jean-François

    2015-01-01

    Purpose To compare currently available non-three-dimensional methods (Response Evaluation Criteria in Solid Tumors [RECIST], European Association for Study of the Liver [EASL], modified RECIST [mRECIST]) with three-dimensional (3D) quantitative methods of the index tumor as early response markers in predicting patient survival after initial transcatheter arterial chemoembolization (TACE). Materials and Methods This was a retrospective single-institution HIPAA-compliant and institutional review board–approved study. From November 2001 to November 2008, 491 consecutive patients underwent intraarterial therapy for liver cancer with either conventional TACE or TACE with drug-eluting beads. A diagnosis of hepatocellular carcinoma (HCC) was made in 290 of these patients. The response of the index tumor on pre- and post-TACE magnetic resonance images was assessed retrospectively in 78 treatment-naïve patients with HCC (63 male; mean age, 63 years ± 11 [standard deviation]). Each response assessment method (RECIST, mRECIST, EASL, and 3D methods of volumetric RECIST [vRECIST] and quantitative EASL [qEASL]) was used to classify patients as responders or nonresponders by following standard guidelines for the uni- and bidimensional measurements and by using the formula for a sphere for the 3D measurements. The Kaplan-Meier method with the log-rank test was performed for each method to evaluate its ability to help predict survival of responders and nonresponders. Uni- and multivariate Cox proportional hazard ratio models were used to identify covariates that had significant association with survival. Results The uni- and bidimensional measurements of RECIST (hazard ratio, 0.6; 95% confidence interval [CI]: 0.3, 1.0; P = .09), mRECIST (hazard ratio, 0.6; 95% CI: 0.6, 1.0; P = .05), and EASL (hazard ratio, 1.1; 95% CI: 0.6, 2.2; P = .75) did not show a significant difference in survival between responders and nonresponders, whereas vRECIST (hazard ratio, 0.6; 95% CI: 0.3, 1

  14. A Transformative Model for Undergraduate Quantitative Biology Education

    ERIC Educational Resources Information Center

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating…

  15. Mathematical Existence Results for the Doi-Edwards Polymer Model

    NASA Astrophysics Data System (ADS)

    Chupin, Laurent

    2016-07-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two dimensional case, without any restriction on the smallness of the data.

  16. Application of existing design software to problems in neuronal modeling.

    PubMed

    Vranić-Sowers, S; Fleshman, J W

    1994-03-01

    In this communication, we describe the application of the Valid/Analog Design Tools circuit simulation package called PC Workbench to the problem of modeling the electrical behavior of neural tissue. A nerve cell representation as an equivalent electrical circuit using compartmental models is presented. Several types of nonexcitable and excitable membranes are designed, and simulation results for different types of electrical stimuli are compared to the corresponding analytical data. It is shown that the hardware/software platform and the models developed constitute an accurate, flexible, and powerful way to study neural tissue. PMID:8045583

  17. Existence of solutions for a host-parasite model

    NASA Astrophysics Data System (ADS)

    Milner, Fabio Augusto; Patton, Curtis Allan

    2001-12-01

    The sea bass Dicentrarchus labrax has several gill ectoparasites. Diplectanum aequans (Plathelminth, Monogenea) is one of these species. Under certain demographic conditions, this flat worm can trigger pathological problems, in particular in fish farms. The life cycle of the parasite is described, and a model for the dynamics of its interaction with the fish is presented and analyzed. The model consists of a coupled system of ordinary differential equations and one integro-differential equation.

  18. Towards a quantitative model of the post-synaptic proteome.

    PubMed

    Sorokina, Oksana; Sorokin, Anatoly; Armstrong, J Douglas

    2011-10-01

    The postsynaptic compartment of the excitatory glutamatergic synapse contains hundreds of distinct polypeptides with a wide range of functions (signalling, trafficking, cell-adhesion, etc.). Structural dynamics in the post-synaptic density (PSD) are believed to underpin cognitive processes. Although functionally and morphologically diverse, PSD proteins are generally enriched with specific domains, which precisely define the mode of clustering essential for signal processing. We applied a stochastic calculus of domain binding provided by a rule-based modelling approach to formalise the highly combinatorial signalling pathway in the PSD and perform the numerical analysis of the relative distribution of protein complexes and their sizes. We specified the combinatorics of protein interactions in the PSD by rules, taking into account protein domain structure, specific domain affinity and relative protein availability. With this model we interrogated the critical conditions for the protein aggregation into large complexes and distribution of both size and composition. The presented approach extends existing qualitative protein-protein interaction maps by considering the quantitative information for stoichiometry and binding properties for the elements of the network. This results in a more realistic view of the postsynaptic proteome at the molecular level. PMID:21874189

  19. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
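
    The common framework the record describes, generators as second-order phase oscillators with forcing and damping, can be sketched directly. The network, coupling strengths, and machine constants below are illustrative stand-ins, not parameters from the paper or its toolbox.

    ```python
    # Sketch: a network of second-order phase oscillators with forcing and
    # damping (the swing-equation form underlying the three power-grid models):
    # M*theta_i'' = P_i - D*theta_i' + sum_j K_ij * sin(theta_j - theta_i).
    # All parameter values here are illustrative, not from the paper.
    import numpy as np
    from scipy.integrate import solve_ivp

    K = np.array([[0.0, 1.0, 1.0],      # coupling strengths K_ij (illustrative)
                  [1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])
    P = np.array([0.5, 0.5, -1.0])      # net power injections, summing to zero
    M, D = 1.0, 0.5                     # inertia and damping constants

    def swing(t, state):
        n = len(P)
        theta, omega = state[:n], state[n:]
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        return np.concatenate([omega, (P - D * omega + coupling) / M])

    sol = solve_ivp(swing, (0, 50), np.zeros(6), max_step=0.05)
    # Frequency synchronization: all omega_i approach a common value,
    # so the spread of final frequencies should be near zero.
    print(np.ptp(sol.y[3:, -1]))
    ```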

  20. Quantitative model of the Cerro Prieto field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-03-01

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field. It takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled. The results of this model match reasonably well observed pressure and temperature distributions. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  2. Exploring Higher Education Business Models ("If Such a Thing Exists")

    ERIC Educational Resources Information Center

    Harney, John O.

    2013-01-01

    The global economic recession has caused students, parents, and policymakers to reevaluate personal and societal investments in higher education--and has prompted the realization that traditional higher ed "business models" may be unsustainable. Predicting a shakeout, most presidents expressed confidence for their own school's ability to…

  3. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  4. Quantitative modeling of quartz vein sealing

    NASA Astrophysics Data System (ADS)

    Wendler, Frank; Okamoto, Atsushi; Schwarz, Jens-Oliver; Enzmann, Frieder; Blum, Philipp

    2014-05-01

    Mineral precipitation significantly affects many aspects of fluid-rock interaction across all length scales, such as the dynamical change of permeability, mechanical interaction, and the redistribution of dissolved material. The hydrothermal growth of quartz is one of the most important mineralization processes in fractures. Tectonically caused fracturing, deformation and fluid transport leaves clearly detectable traces in the microstructure of the mineralized veins. As these patterns give hints on the deformation history and the fluid pathways through former fracture networks, accurate spatio-temporal modeling of vein mineralization is of special interest, and the objective of this study. Due to the intricate polycrystalline geometries involved, the underlying physical processes like diffusion, advection and crystal growth have to be captured at the grain scale. To this end, we adapt a thermodynamically consistent phase-field model (PFM), which combines a kinetic growth law and mass transport equations with irreversible thermodynamics of interfaces and bulk phases. Each grain in the simulation domain is captured by a phase field with individual orientation given by three Euler angles. The model evolves in discrete time steps using a finite difference algorithm on a regular grid, optimized for large grain assemblies. The underlying processes are highly nonlinear, and for geological samples, boundary conditions as well as many of the physical parameters are not precisely known. One motivation in this study is to validate the adequately parameterized model vs. hydrothermal experiments under defined (p,T,c) conditions. Different from former approaches in vein growth simulation, the PFM is configured using thermodynamic data from established geochemical models. Previously conducted batch flow experiments of hydrothermal quartz growth were analyzed with electron backscatter diffraction (EBSD) and used to calibrate the unknown kinetic anisotropy parameters. In the
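
    The multi-orientation quartz model described above is elaborate; as a minimal, hedged illustration of the core phase-field idea (an order parameter relaxing toward bulk phases while interfaces stay diffuse), the sketch below solves the 1D Allen-Cahn equation by explicit finite differences on a regular grid. It is a toy prototype, not the vein-sealing model itself.

    ```python
    # Sketch: 1D Allen-Cahn relaxation, the simplest phase-field prototype:
    # d(phi)/dt = eps^2 * phi_xx + phi - phi^3, explicit finite differences.
    import numpy as np

    nx, dx, dt, eps = 200, 0.1, 1e-3, 0.3
    x = np.arange(nx) * dx
    rng = np.random.default_rng(7)
    phi = np.where(x < nx * dx / 2, -1.0, 1.0) + 0.1 * rng.normal(size=nx)

    for _ in range(5000):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic
        phi = phi + dt * (eps**2 * lap + phi - phi**3)

    # phi relaxes to the +/-1 bulk phases, separated by diffuse interfaces
    # whose width scales with eps.
    print(phi.min().round(2), phi.max().round(2))
    ```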

  5. Steps toward quantitative infrasound propagation modeling

    NASA Astrophysics Data System (ADS)

    Waxler, Roger; Assink, Jelle; Lalande, Jean-Marie; Velea, Doru

    2016-04-01

    Realistic propagation modeling requires propagation models capable of incorporating the relevant physical phenomena as well as sufficiently accurate atmospheric specifications. The wind speed and temperature gradients in the atmosphere provide multiple ducts in which low frequency sound, infrasound, can propagate efficiently. The winds in the atmosphere are quite variable, both temporally and spatially, causing the sound ducts to fluctuate. For ground-to-ground propagation the ducts can be borderline, in that small perturbations can create or destroy a duct. In such cases the signal propagation is very sensitive to fluctuations in the wind, often producing highly dispersed signals. The accuracy of atmospheric specifications is constantly improving as sounding technology develops. There is, however, a disconnect between sound propagation and atmospheric specification, in that atmospheric specifications are necessarily statistical in nature while sound propagates through a particular atmospheric state. In addition, infrasonic signals can travel to great altitudes, on the order of 120 km, before refracting back to earth. At such altitudes the atmosphere becomes quite rarefied, causing sound propagation to become highly non-linear and attenuating. Approaches to these problems will be presented.

  6. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  7. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. PMID:25521665
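
    The refinement step described above, collapsing three measures into one quantitative composite via principal components analysis, can be sketched as follows. The scores are randomly generated stand-ins, not the study's data; only the sample size echoes the record.

    ```python
    # Sketch: combining three quantitative measures into a single pathway
    # score via principal components analysis. Data are synthetic stand-ins.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 141                                  # sample size reported in the record
    latent = rng.normal(size=n)              # shared "quantitative skill" factor
    scores = np.column_stack([latent + rng.normal(scale=s, size=n)
                              for s in (0.5, 0.7, 0.9)])  # subitizing, counting,
                                                          # magnitude comparison
    z = StandardScaler().fit_transform(scores)
    pca = PCA(n_components=1)
    pathway = pca.fit_transform(z).ravel()   # first principal component
    print("variance explained:", round(pca.explained_variance_ratio_[0], 2))
    ```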

  8. Stoffenmanager exposure model: development of a quantitative algorithm.

    PubMed

    Tielemans, Erik; Noy, Dook; Schinkel, Jody; Heussen, Henri; Van Der Schaaf, Doeke; West, John; Fransman, Wouter

    2008-08-01

    In The Netherlands, the web-based tool called 'Stoffenmanager' was initially developed to assist small- and medium-sized enterprises to prioritize and control risks of handling chemical products in their workplaces. The aim of the present study was to explore the accuracy of the Stoffenmanager exposure algorithm. This was done by comparing its semi-quantitative exposure rankings for specific substances with exposure measurements collected from several occupational settings to derive a quantitative exposure algorithm. Exposure data were collected using two strategies. First, we conducted seven surveys specifically for validation of the Stoffenmanager. Second, existing occupational exposure data sets were collected from various sources. This resulted in 378 and 320 measurements for solid and liquid scenarios, respectively. The Spearman correlation coefficients between Stoffenmanager scores and exposure measurements appeared to be good for handling solids (r(s) = 0.80, N = 378, P < 0.0001) and liquid scenarios (r(s) = 0.83, N = 320, P < 0.0001). However, the correlation for liquid scenarios appeared to be lower when calculated separately for sets of volatile substances with a vapour pressure >10 Pa (r(s) = 0.56, N = 104, P < 0.0001) and non-volatile substances with a vapour pressure < or =10 Pa (r(s) = 0.53, N = 216, P < 0.0001). The mixed-effect regression models with natural log-transformed Stoffenmanager scores as independent parameter explained a substantial part of the total exposure variability (52% for solid scenarios and 76% for liquid scenarios). Notwithstanding the good correlation, the data show substantial variability in exposure measurements given a certain Stoffenmanager score. The overall performance increases our confidence in the use of the Stoffenmanager as a generic tool for risk assessment. The mixed-effect regression models presented in this paper may be used for assessment of so-called reasonable worst case exposures. This evaluation is
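
    The validation logic of the record, rank-correlating tool scores with measured exposures and then regressing log exposure on log score, can be sketched with synthetic data; none of the numbers below reproduce the study's measurements, and the mixed-effect structure is reduced here to a simple fixed-effect fit.

    ```python
    # Sketch: rank correlation between Stoffenmanager-style scores and measured
    # exposures, plus a log-linear fit of the kind used to derive a quantitative
    # algorithm. All data are synthetic placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    score = rng.uniform(1, 100, size=320)                              # tool score
    exposure = 0.05 * score**1.2 * rng.lognormal(sigma=0.8, size=320)  # mg/m^3

    rho, p = stats.spearmanr(score, exposure)
    print(f"Spearman r_s = {rho:.2f} (p = {p:.1e})")

    # Fixed-effect version of the regression: ln(exposure) on ln(score)
    fit = stats.linregress(np.log(score), np.log(exposure))
    print(f"ln(exposure) = {fit.intercept:.2f} + {fit.slope:.2f} ln(score), "
          f"R^2 = {fit.rvalue**2:.2f}")
    ```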

  9. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals.

    PubMed

    Gobas, Frank A P C; Burkhard, Lawrence P; Doucette, William J; Sappington, Keith G; Verbruggen, Eric M J; Hope, Bruce K; Bonnell, Mark A; Arnot, Jon A; Tarazona, Jose V

    2016-01-01

    Protocols for terrestrial bioaccumulation assessments are far less developed than those for aquatic systems. This article reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, invertebrate, mammal, and avian species and for entire terrestrial food webs, including some that consider spatial factors. Limitations and gaps in terrestrial bioaccumulation modeling include the lack of QSARs for biotransformation and dietary assimilation efficiencies for terrestrial species; the lack of models and QSARs for important terrestrial species such as insects, amphibians, and reptiles; the lack of standardized testing protocols for plants, with limited development of plant models; and the limited chemical domain of existing bioaccumulation models and QSARs (e.g., primarily applicable to nonionic organic chemicals). There is an urgent need for high-quality field data sets for validating models and assessing their performance. There is a need to improve coordination among laboratory, field, and modeling efforts on bioaccumulative substances in order to improve the state of the science for challenging substances. PMID:26272325
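
    At their core, many of the reviewed models reduce to first-order uptake and elimination kinetics. A minimal sketch of that common skeleton follows; the rate constants and environmental concentration are invented for illustration.

    ```python
    # Sketch: one-compartment bioaccumulation kinetics of the kind underlying
    # many reviewed models: dC/dt = k_uptake*C_env - (k_elim + k_met)*C, with
    # steady-state bioaccumulation factor BAF = k_uptake / (k_elim + k_met).
    # Rate constants below are invented for illustration.
    import numpy as np

    k_uptake, k_elim, k_met = 500.0, 0.05, 0.02   # 1/day (illustrative)
    c_env = 1e-6                                   # chemical in soil/diet, g/kg

    t = np.linspace(0, 200, 400)                   # days
    k_total = k_elim + k_met
    c = (k_uptake * c_env / k_total) * (1 - np.exp(-k_total * t))  # analytic C(t)

    baf = k_uptake / k_total
    print(f"steady-state BAF = {baf:.0f}, C_ss = {baf * c_env:.2e} g/kg")
    ```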

  10. Lessons learned from quantitative dynamical modeling in systems biology.

    PubMed

    Raue, Andreas; Schilling, Marcel; Bachmann, Julie; Matteson, Andrew; Schelker, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D; Theis, Fabian J; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amounts of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows one to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach, data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on Latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642
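
    The winning strategy the record describes, derivative-based least squares launched from Latin hypercube starting points, can be sketched on a toy model. The exponential-decay model, bounds, and number of starts below are illustrative choices, not the study's setup.

    ```python
    # Sketch: multi-start, derivative-based parameter estimation with Latin
    # hypercube starting points, on a toy exponential-decay model.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.stats import qmc

    t = np.linspace(0, 10, 30)
    true = np.array([2.0, 0.4])                      # amplitude, decay rate
    data = true[0] * np.exp(-true[1] * t) \
        + np.random.default_rng(2).normal(scale=0.05, size=t.size)

    def residuals(p):
        return p[0] * np.exp(-p[1] * t) - data

    lo, hi = [0.1, 0.01], [10.0, 2.0]                # parameter bounds
    starts = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(20), lo, hi)

    fits = [least_squares(residuals, s, bounds=(lo, hi)) for s in starts]
    best = min(fits, key=lambda f: f.cost)           # keep the lowest-cost fit
    print("best-fit parameters:", best.x.round(3))
    ```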

  11. Quantitative and logic modelling of gene and molecular networks

    PubMed Central

    Le Novère, Nicolas

    2015-01-01

    Behaviours of complex biomolecular systems are often irreducible to the elementary properties of their individual components. Explanatory and predictive mathematical models are therefore useful for fully understanding and precisely engineering cellular functions. The development and analyses of these models require their adaptation to the problems that need to be solved and the type and amount of available genetic or molecular data. Quantitative and logic modelling are among the main methods currently used to model molecular and gene networks. Each approach comes with inherent advantages and weaknesses. Recent developments show that hybrid approaches will become essential for further progress in synthetic biology and in the development of virtual organisms. PMID:25645874

  12. Sensitivity, noise and quantitative model of Laser Speckle Contrast Imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai

    In the dissertation, I present several studies on Laser Speckle Contrast Imaging (LSCI). The two major goals of those studies are: (1) to improve the signal-to-noise ratio (SNR) of LSCI so it can be used to detect small blood flow changes due to brain activities; (2) to find a reliable quantitative model so LSCI results can be compared among experiments and subjects and even with results from other blood flow monitoring techniques. We sought to improve SNR in the following ways: (1) We investigated the relationship between exposure time and the sensitivities of LSCI. We found that relative sensitivity reaches its maximum at an exposure time of around 5 ms. (2) We studied the relationship between laser speckle and camera aperture stop, which is in effect the relationship between laser speckle and the speckle/pixel size ratio. In general, the speckle/pixel size ratio should be approximately 1.5-2 to maximize the detection factor beta as well as the speckle contrast (SC) value and absolute sensitivity. This is also an important study for quantitative model development. (3) We worked on noise analysis and modeling. Noise affects both the SNR and the quantitative model. Usually random noise is more critical for SNR analysis. The main random noises in LSCI are statistical noise and physiological noise. Some physiological noises are caused by the small motions induced by heartbeat or breathing. These are periodic and can be eliminated using methods discussed in this dissertation. Statistical noise is more fundamental and cannot be eliminated entirely. However, it can be greatly reduced by increasing the effective pixel number N for speckle contrast processing. To develop the quantitative model, we did the following: (1) We considered more experimental factors in the quantitative model and removed several ideal-case assumptions. In particular, in our model we considered the general detection factor beta, static scatterers and systematic noise. A simple calibration procedure is suggested
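
    The basic quantity behind all of the above is the local speckle contrast K = sigma/mean computed over small pixel windows. A minimal sketch of that computation follows; the window size and the synthetic test image are illustrative choices, not the dissertation's settings.

    ```python
    # Sketch: computing a local speckle contrast map K = sigma / mean over
    # small pixel windows, the basic LSCI quantity.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(image, window=7):
        """Local contrast K = std/mean in a window x window neighborhood."""
        img = image.astype(float)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img**2, size=window)
        var = np.clip(mean_sq - mean**2, 0, None)   # guard against round-off
        return np.sqrt(var) / mean

    # Fully developed static speckle has exponential intensity statistics and
    # K near 1; flow blurs the speckle during the exposure and lowers K.
    rng = np.random.default_rng(3)
    raw = rng.exponential(scale=100.0, size=(256, 256))
    print("mean K:", round(speckle_contrast(raw).mean(), 2))
    ```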

  13. Existence of almost periodic solution of a model of phytoplankton allelopathy with delay

    NASA Astrophysics Data System (ADS)

    Abbas, Syed; Mahto, Lakshman

    2012-09-01

    In this paper we discuss a non-autonomous two-species competitive allelopathic phytoplankton model in which each species produces a chemical that stimulates the growth of the other. We study the existence and uniqueness of an almost periodic solution for this model system. Sufficient conditions are derived for the existence of a unique almost periodic solution.

  14. Transgenic models of Alzheimer's disease: better utilization of existing models through viral transgenesis.

    PubMed

    Platt, Thomas L; Reeves, Valerie L; Murphy, M Paul

    2013-09-01

    Animal models have been used for decades in the Alzheimer's disease (AD) research field and have been crucial for the advancement of our understanding of the disease. Most models are based on familial AD mutations of genes involved in the amyloidogenic process, such as the amyloid precursor protein (APP) and presenilin 1 (PS1). Some models also incorporate mutations in tau (MAPT) known to cause frontotemporal dementia, a neurodegenerative disease that shares some elements of neuropathology with AD. While these models are complex, they fail to display pathology that perfectly recapitulates that of the human disease. Unfortunately, this level of pre-existing complexity creates a barrier to the further modification and improvement of these models. However, as the efficacy and safety of viral vectors improves, their use as an alternative to germline genetic modification is becoming a widely used research tool. In this review we discuss how this approach can be used to better utilize common mouse models in AD research. This article is part of a Special Issue entitled: Animal Models of Disease. PMID:23619198

  15. Quantitative modeling of facet development in ventifacts by sand abrasion

    NASA Astrophysics Data System (ADS)

    Várkonyi, Péter L.; Laity, Julie E.; Domokos, Gábor

    2016-03-01

    We use a quantitative model to examine rock abrasion by direct impacts of sand grains. Two distinct mechanisms are uncovered (unidirectional and isotropic), which contribute to the macro-scale morphological characters (sharp edges and flat facets) of ventifacts. It is found that facet formation under conditions of a unidirectional wind relies on certain mechanical properties of the rock material, and we confirm the dominant role of this mechanism in the formation of large ventifacts. Nevertheless, small ventifacts may also be shaped into polyhedral forms in a different way (the isotropic mechanism), which is sensitive neither to wind characteristics nor to rock material properties. The latter mechanism leads to several 'mature' shapes, which are surprisingly analogous to the morphologies of typical small ventifacts. Our model is also able to explain certain quantitative laboratory and field observations, including the quick decay of facet angles of ventifacts followed by stabilization in the range 20-30°.

  16. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is a suitable MMM parameter with which to establish a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and that the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
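
    For two independent Gaussian variables, stress-strength interference has a closed form, which is presumably the backbone of the reliability model above: R = P(strength > stress) = Phi((mu_s - mu_l)/sqrt(sd_s^2 + sd_l^2)). The sketch below evaluates it for hypothetical moments; the values are not taken from the paper.

    ```python
    # Sketch: stress-strength interference for two independent Gaussians.
    # If strength S ~ N(mu_s, sd_s^2) and stress L ~ N(mu_l, sd_l^2), then
    # reliability R = P(S > L) has the closed form below. Moments are
    # hypothetical, not values from the paper.
    from math import sqrt
    from scipy.stats import norm

    mu_s, sd_s = 120.0, 15.0   # strength distribution (hypothetical units)
    mu_l, sd_l = 80.0, 20.0    # stress distribution

    R = norm.cdf((mu_s - mu_l) / sqrt(sd_s**2 + sd_l**2))
    print(f"reliability R = {R:.4f}")   # probability strength exceeds stress
    ```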

  17. Quantitative phase-field modeling of dendritic electrodeposition

    NASA Astrophysics Data System (ADS)

    Cogswell, Daniel A.

    2015-07-01

    A thin-interface phase-field model of electrochemical interfaces is developed based on Marcus kinetics for concentrated solutions, and used to simulate dendrite growth during electrodeposition of metals. The model is derived in the grand electrochemical potential to permit the interface to be widened to reach experimental length and time scales, and electroneutrality is formulated to eliminate the Debye length. Quantitative agreement is achieved with zinc Faradaic reaction kinetics, fractal growth dimension, tip velocity, and radius of curvature. Reducing the exchange current density is found to suppress the growth of dendrites, and screening electrolytes by their exchange currents is suggested as a strategy for controlling dendrite growth in batteries.

  19. Quantitative analysis of a wind energy conversion model

    NASA Astrophysics Data System (ADS)

    Zucker, Florian; Gräbner, Anna; Strunz, Andreas; Meyn, Jan-Peter

    2015-03-01

    A rotor of 12 cm diameter is attached to a precision electric motor, used as a generator, to make a model wind turbine. Output power of the generator is measured in a wind tunnel with up to 15 m s⁻¹ air velocity. The maximum power is 3.4 W, and the power conversion factor from kinetic to electric energy is c_p = 0.15. The v³ power law is confirmed. The model illustrates several technically important features of industrial wind turbines quantitatively.
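
    The reported conversion factor follows directly from the definition c_p = P / (½ ρ A v³); a quick check with the record's own numbers, assuming a standard air density of about 1.2 kg/m³:

    ```python
    # Sketch: checking the reported power conversion factor from the definition
    # c_p = P / (0.5 * rho * A * v^3), assuming rho ~ 1.2 kg/m^3 for air.
    from math import pi

    P = 3.4        # W, maximum electric output reported
    v = 15.0       # m/s, air velocity
    d = 0.12       # m, rotor diameter
    rho = 1.2      # kg/m^3, air density (assumed)

    A = pi * (d / 2) ** 2                 # swept area
    cp = P / (0.5 * rho * A * v ** 3)
    print(f"c_p = {cp:.2f}")              # ~0.15, matching the record
    ```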

  20. Quantitative magnetospheric models derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.
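
    The fitting step the record describes, least squares of field measurements against a power-series expansion in position and tilt angle, can be sketched generically. The basis terms, coordinate ranges, and toy data below are illustrative only and do not reproduce the IMP models.

    ```python
    # Sketch: least-squares fit of a measured field component to a power-series
    # expansion in position coordinates and dipole tilt angle. Basis choice and
    # data are illustrative stand-ins, not the IMP data or model coefficients.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 500
    x, y, z = rng.uniform(-10, 10, (3, n))    # solar magnetic coordinates, R_E
    tilt = rng.uniform(-0.6, 0.6, n)          # dipole tilt angle, rad

    # Design matrix of low-order polynomial terms (truncated for brevity)
    A = np.column_stack([np.ones(n), x, y, z, tilt, x*z, y*z, tilt*x, tilt*z])
    Bz_obs = 2.0 - 0.3*z + 0.5*tilt*x + rng.normal(scale=0.1, size=n)  # toy data

    coef, *_ = np.linalg.lstsq(A, Bz_obs, rcond=None)
    print(coef.round(2))   # recovered expansion coefficients for B_z
    ```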

  1. Strong existence and uniqueness of the stationary distribution for a stochastic inviscid dyadic model

    NASA Astrophysics Data System (ADS)

    Andreis, Luisa; Barbato, David; Collet, Francesca; Formentin, Marco; Provenzano, Luigi

    2016-03-01

    We consider an inviscid stochastically forced dyadic model, where the additive noise acts only on the first component. We prove that a strong solution for this problem exists and is unique by means of uniform energy estimates. Moreover, we exploit these results to establish strong existence and uniqueness of the stationary distribution.

  2. Quantitative modeling of transcription factor binding specificities using DNA shape.

    PubMed

    Zhou, Tianyin; Shen, Ning; Yang, Lin; Abe, Namiko; Horton, John; Mann, Richard S; Bussemaker, Harmen J; Gordân, Raluca; Rohs, Remo

    2015-04-14

    DNA binding specificities of transcription factors (TFs) are a key component of gene regulatory processes. Underlying mechanisms that explain the highly specific binding of TFs to their genomic target sites are poorly understood. A better understanding of TF-DNA binding requires the ability to quantitatively model TF binding to accessible DNA as its basic step, before additional in vivo components can be considered. Traditionally, these models were built based on nucleotide sequence. Here, we integrated 3D DNA shape information derived with a high-throughput approach into the modeling of TF binding specificities. Using support vector regression, we trained quantitative models of TF binding specificity based on protein binding microarray (PBM) data for 68 mammalian TFs. The evaluation of our models included cross-validation on specific PBM array designs, testing across different PBM array designs, and using PBM-trained models to predict relative binding affinities derived from in vitro selection combined with deep sequencing (SELEX-seq). Our results showed that shape-augmented models compared favorably to sequence-based models. Although both k-mer and DNA shape features can encode interdependencies between nucleotide positions of the binding site, using DNA shape features reduced the dimensionality of the feature space. In addition, analyzing the feature weights of DNA shape-augmented models uncovered TF family-specific structural readout mechanisms that were not revealed by the DNA sequence. As such, this work combines knowledge from structural biology and genomics, and suggests a new path toward understanding TF binding and genome function. PMID:25775564
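
    The modeling step itself, support vector regression of binding scores on sequence-derived features, can be schematized as below. One-hot mononucleotide features stand in for the paper's k-mer and DNA-shape feature sets, and the scores are toy data, so this is a shape of the pipeline rather than a reproduction of it.

    ```python
    # Sketch: support vector regression of binding scores on sequence features.
    # One-hot mononucleotide features stand in for the paper's k-mer and
    # DNA-shape features; sequences and scores are toy data.
    import numpy as np
    from sklearn.svm import SVR

    def one_hot(seq):
        table = {"A": 0, "C": 1, "G": 2, "T": 3}
        x = np.zeros((len(seq), 4))
        x[np.arange(len(seq)), [table[b] for b in seq]] = 1.0
        return x.ravel()

    rng = np.random.default_rng(4)
    seqs = ["".join(rng.choice(list("ACGT"), 8)) for _ in range(200)]
    X = np.array([one_hot(s) for s in seqs])
    y = X @ rng.normal(size=X.shape[1]) + rng.normal(scale=0.1, size=200)

    model = SVR(kernel="rbf", C=1.0).fit(X[:150], y[:150])   # train split
    r = np.corrcoef(model.predict(X[150:]), y[150:])[0, 1]   # held-out test
    print(f"held-out correlation: {r:.2f}")
    ```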

  3. Quantitative structure property relationship modeling of excipient properties for prediction of formulation characteristics.

    PubMed

    Gaikwad, Vinod L; Bhatia, Neela M; Desai, Sujit A; Bhatia, Manish S

    2016-10-20

    Quantitative structure-property relationship (QSPR) modeling is used to relate excipient descriptors to formulation properties. A QSPR model is developed by regression analysis of selected descriptors contributing towards the targeted formulation properties. The developed QSPR model was validated by the true external method, where it showed good accuracy and precision in predicting the formulation composition, as the experimental t_90% (61.35 min) was observed to be very close to the predicted t_90% (67.37 min). Hence, the QSPR approach saves resources by predicting drug release before a formulation is prepared, avoiding repetitive trials in the development of a new formulation and/or the optimization of an existing one. PMID:27474604

  4. The conceptual approach to quantitative modeling of guard cells

    PubMed Central

    Blatt, Michael R.; Hills, Adrian; Chen, Zhong-Hua; Wang, Yizhou; Papanatsiou, Maria; Lew, Vigilio L.

    2013-01-01

    Much of the 70% of global water usage associated with agriculture passes through stomatal pores of plant leaves. The guard cells, which regulate these pores, thus have a profound influence on photosynthetic carbon assimilation and water use efficiency of plants. We recently demonstrated how quantitative mathematical modeling of guard cells with the OnGuard modeling software yields detail sufficient to guide phenotypic and mutational analysis. This advance represents an all-important step toward applications in directing “reverse-engineering” of guard cell function for improved water use efficiency and carbon assimilation. OnGuard is nonetheless challenging for those unfamiliar with a modeler’s way of thinking. In practice, each model construct represents a hypothesis under test, to be discarded, validated or refined by comparisons between model predictions and experimental results. The few guidelines set out here summarize the standard and logical starting points for users of the OnGuard software. PMID:23221747

  5. A quantitative coarse-grain model for lipid bilayers.

    PubMed

    Orsi, Mario; Haubertin, David Y; Sanderson, Wendy E; Essex, Jonathan W

    2008-01-24

    A simplified particle-based computer model for hydrated phospholipid bilayers has been developed and applied to quantitatively predict the major physical features of fluid-phase biomembranes. Compared with available coarse-grain methods, three novel aspects are introduced. First, the main electrostatic features of the system are incorporated explicitly via charges and dipoles. Second, water is accurately (yet efficiently) described, on an individual level, by the soft sticky dipole model. Third, hydrocarbon tails are modeled using the anisotropic Gay-Berne potential. Simulations are conducted by rigid-body molecular dynamics. Our technique proves 2 orders of magnitude less demanding of computational resources than traditional atomic-level methodology. Self-assembled bilayers quantitatively reproduce experimental observables such as electron density, compressibility moduli, dipole potential, lipid diffusion, and water permeability. The lateral pressure profile has been calculated, along with the elastic curvature constants of the Helfrich expression for the membrane bending energy; results are consistent with experimental estimates and atomic-level simulation data. Several of the results presented have been obtained for the first time using a coarse-grain method. Our model is also directly compatible with atomic-level force fields, allowing mixed systems to be simulated in a multiscale fashion. PMID:18085766

  6. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models or between models and real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find that the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found the social force model agrees best with this real data.
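
    The first step of the paper's procedure, a matrix of statistical distances between the models' distributions of an observable, can be sketched as follows; the full DISTATIS compromise across all six observables is omitted, the Wasserstein metric is one reasonable distance choice rather than necessarily the paper's, and the samples are synthetic stand-ins.

    ```python
    # Sketch: a model-to-model distance matrix for one observable (synthetic
    # "evacuation time" samples standing in for the three crowd models).
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(5)
    samples = {
        "lattice gas": rng.normal(100, 10, 500),
        "social force": rng.normal(110, 15, 500),
        "RVO2": rng.normal(102, 9, 500),
    }
    names = list(samples)
    D = np.array([[wasserstein_distance(samples[a], samples[b]) for b in names]
                  for a in names])
    print(names)
    print(D.round(2))      # symmetric distance matrix, zeros on the diagonal
    ```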

  7. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. PMID:24130119

  8. Three models intercomparison for Quantitative Precipitation Forecast over Calabria

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.

    2004-11-01

    In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations), funded by MIUR (Ministero dell'Università e della Ricerca Scientifica, the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Modeling 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison. This model runs operationally at “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici” (the Italian Agency for Environmental Protection and Technical Services). Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. Large-scale forcing is the same for all models considered, and differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models. The largest differences are for BIA compared to the other scores considered. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better for the second-day forecast.
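
    Categorical QPF scores such as BIA (the frequency bias) are computed from a 2x2 contingency table of forecast versus observed rain events at a chosen threshold. A minimal sketch follows; the threshold and raingauge values are invented for illustration, not the study's data.

    ```python
    # Sketch: categorical QPF verification from a 2x2 contingency table at a
    # rain/no-rain threshold. BIA (frequency bias) > 1 means over-forecasting.
    import numpy as np

    def contingency(forecast, observed, threshold=1.0):
        f, o = forecast >= threshold, observed >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        return hits, false_alarms, misses

    fcst = np.array([0.0, 2.5, 5.0, 0.2, 3.0, 0.0])   # toy forecast values, mm
    obs = np.array([0.0, 1.5, 0.5, 0.0, 4.0, 2.0])    # toy raingauge values, mm

    h, fa, m = contingency(fcst, obs)
    bia = (h + fa) / (h + m)          # forecast yes-events / observed yes-events
    print(f"BIA = {bia:.2f}")
    ```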

  9. Quantitative model of magnetic coupling between solar wind and magnetosphere

    NASA Technical Reports Server (NTRS)

    Toffoletto, F. R.; Hill, T. W.

    1986-01-01

    Preliminary results are presented of a quantitative three-dimensional model of an open steady-state magnetosphere configuration incorporating a normal-component distribution corresponding to the subsolar merging-line hypothesis. The distribution of the normal magnetic-field component at the magnetopause is used as input and is used to calculate an interconnection magnetic field that links the internal and external fields. The interconnected field is then used to map the solar-wind electric field onto the polar cap. The resulting polar-cap flow patterns are found to be in agreement with observations.

  10. Quantitative Modeling of Single Atom High Harmonic Generation

    SciTech Connect

    Gordon, Ariel; Kaertner, Franz X.

    2005-11-25

    It is shown by comparison with numerical solutions of the Schrödinger equation that the three step model (TSM) of high harmonic generation (HHG) can be improved to give a quantitatively reliable description of the process. Excellent agreement is demonstrated for the H atom and the H₂⁺ molecular ion. It is shown that the standard TSM heavily distorts the HHG spectra, especially of H₂⁺, and an explanation is presented for this behavior. Key to the improvement is the use of the Ehrenfest theorem in the TSM.
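
    For orientation, the best-known quantitative prediction of the three step model is the harmonic cutoff law; the standard textbook form is quoted below as background, not as a result of the paper above.

```latex
% Standard cutoff law of the three step model (textbook result): the maximum
% photon energy of the plateau harmonics is
\[
  E_{\mathrm{cutoff}} \simeq I_p + 3.17\, U_p,
  \qquad
  U_p = \frac{e^2 E_0^2}{4 m_e \omega^2},
\]
% with I_p the ionization potential and U_p the ponderomotive energy of an
% electron in a laser field of amplitude E_0 and angular frequency \omega.
```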

  11. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  12. Quantitative modeling of soil sorption for xenobiotic chemicals

    SciTech Connect

    Sabljić, A.

    1989-11-01

    Experimentally determining the soil sorption behavior of xenobiotic chemicals during the last 10 years has been costly, time-consuming, and very tedious. Since an estimated 100,000 chemicals are currently in common use and new chemicals are registered at a rate of 1000 per year, it is obvious that our human and material resources are insufficient to obtain their soil sorption data experimentally. Much work is being done to find alternative methods that will enable us to accurately and rapidly estimate the soil sorption coefficients of pesticides and other classes of organic pollutants. Empirical models, based on water solubility and n-octanol/water partition coefficients, have been proposed as alternative, accurate methods to estimate soil sorption coefficients. An analysis of these models has shown (a) low precision of water solubility and n-octanol/water partition data, (b) a variety of quantitative models describing the relationship between soil sorption and the above-mentioned properties, and (c) violations of some basic statistical laws when these quantitative models were developed. During the last 5 years considerable efforts were made to develop nonempirical models that are free of the errors inherent in all models based on empirical variables. Thus far molecular topology has been shown to be the most successful structural property for describing and predicting soil sorption coefficients. The first-order molecular connectivity index was demonstrated to correlate extremely well with the soil sorption coefficients of polycyclic aromatic hydrocarbons (PAHs), alkylbenzenes, chlorobenzenes, chlorinated alkanes and alkenes, heterocyclic and heterosubstituted PAHs, and halogenated phenols. The average difference between predicted and observed soil sorption coefficients is only 0.2 on the logarithmic scale (corresponding to a factor of 1.5). 63 references.
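
    A minimal sketch of the first-order molecular connectivity index mentioned above (the Randić branching index): each bond of the hydrogen-suppressed molecular graph contributes the inverse square root of the product of the vertex degrees it joins. The graph encoding below is our own assumption.

```python
# First-order molecular connectivity index: sum over bonds (i, j) of
# (delta_i * delta_j) ** -0.5, where delta is the number of non-hydrogen
# neighbors of each atom in the hydrogen-suppressed graph.
from math import sqrt

def connectivity_index(bonds):
    """bonds: list of (i, j) atom-index pairs of the heavy-atom skeleton."""
    delta = {}
    for i, j in bonds:
        delta[i] = delta.get(i, 0) + 1
        delta[j] = delta.get(j, 0) + 1
    return sum(1.0 / sqrt(delta[i] * delta[j]) for i, j in bonds)

# n-butane C1-C2-C3-C4: deltas 1,2,2,1 -> 1/sqrt(2) + 1/2 + 1/sqrt(2) = 1.914
print(round(connectivity_index([(1, 2), (2, 3), (3, 4)]), 3))
```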

  13. Quantitative modeling of soil sorption for xenobiotic chemicals.

    PubMed Central

    Sabljić, A

    1989-01-01

    Experimentally determining the soil sorption behavior of xenobiotic chemicals during the last 10 years has been costly, time-consuming, and very tedious. Since an estimated 100,000 chemicals are currently in common use and new chemicals are registered at a rate of 1000 per year, it is obvious that our human and material resources are insufficient to obtain their soil sorption data experimentally. Much work is being done to find alternative methods that will enable us to accurately and rapidly estimate the soil sorption coefficients of pesticides and other classes of organic pollutants. Empirical models, based on water solubility and n-octanol/water partition coefficients, have been proposed as alternative, accurate methods to estimate soil sorption coefficients. An analysis of these models has shown (a) low precision of water solubility and n-octanol/water partition data, (b) a variety of quantitative models describing the relationship between soil sorption and the above-mentioned properties, and (c) violations of some basic statistical laws when these quantitative models were developed. During the last 5 years considerable efforts were made to develop nonempirical models that are free of the errors inherent in all models based on empirical variables. Thus far molecular topology has been shown to be the most successful structural property for describing and predicting soil sorption coefficients. The first-order molecular connectivity index was demonstrated to correlate extremely well with the soil sorption coefficients of polycyclic aromatic hydrocarbons (PAHs), alkylbenzenes, chlorobenzenes, chlorinated alkanes and alkenes, heterocyclic and heterosubstituted PAHs, and halogenated phenols. The average difference between predicted and observed soil sorption coefficients is only 0.2 on the logarithmic scale (corresponding to a factor of 1.5). A comparison of the molecular connectivity model with the empirical models described earlier shows that the former is superior in

  14. A quantitative evaluation of models for Aegean crustal deformation

    NASA Astrophysics Data System (ADS)

    Nyst, M.; Thatcher, W.

    2003-04-01

    Modeling studies of eastern Mediterranean tectonics show that Aegean deformation is mainly determined by WSW directed expulsion of Anatolia and SW directed extension due to roll-back of African lithosphere along the Hellenic trench. How motion is transferred across the Aegean remains a subject of debate. The two most widely used hypotheses for Aegean tectonics assert fundamentally different mechanisms. The first model describes deformation as a result of opposing rotations of two rigid microplates separated by a zone of extension. In the second model most motion is accommodated by shear on a series of dextral faults and extension on graben systems. These models make different quantitative predictions for the crustal deformation field that can be tested by a new, spatially dense GPS velocity data set. To convert the GPS data into crustal deformation parameters we use different methods to model complementary aspects of crustal deformation. We parameterize the main fault and plate boundary structures of both models and produce representations for the crustal deformation field that range from purely rigid rotations of microplates, via interacting, elastically deforming blocks separated by crustal faults to a continuous velocity gradient field. Critical evaluation of these models indicates strengths and limitations of each and suggests new measurements for further refining understanding of present-day Aegean tectonics.

  15. A QUANTITATIVE MODEL OF ERROR ACCUMULATION DURING PCR AMPLIFICATION

    PubMed Central

    Pienaar, E; Theron, M; Nelson, M; Viljoen, HJ

    2006-01-01

    The amplification of target DNA by the polymerase chain reaction (PCR) produces copies which may contain errors. Two sources of errors are associated with the PCR process: (1) editing errors that occur during DNA polymerase-catalyzed enzymatic copying and (2) errors due to DNA thermal damage. In this study a quantitative model of error frequencies is proposed and the role of reaction conditions is investigated. The errors which are ascribed to the polymerase depend on the efficiency of its editing function as well as the reaction conditions; specifically the temperature and the dNTP pool composition. Thermally induced errors stem mostly from three sources: A+G depurination, oxidative damage of guanine to 8-oxoG and cytosine deamination to uracil. The post-PCR modifications of sequences are primarily due to exposure of nucleic acids to elevated temperatures, especially if the DNA is in a single-stranded form. The proposed quantitative model predicts the accumulation of errors over the course of a PCR cycle. Thermal damage contributes significantly to the total errors; therefore consideration must be given to thermal management of the PCR process. PMID:16412692
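
    A toy sketch of how such per-cycle error accumulation can be tallied, loosely following the paper's two error sources: polymerase misincorporation on each copying event and thermal damage on every molecule in every cycle. All rates and the bookkeeping are illustrative assumptions, not the authors' parameter values.

```python
# Hedged toy model: expected errors per molecule after n PCR cycles.
def expected_errors(n_cycles, length_bp, eff=0.9,
                    pol_rate=1e-5, thermal_rate=1e-7):
    molecules, total_errors = 1.0, 0.0
    for _ in range(n_cycles):
        new = molecules * eff                       # newly synthesized strands
        total_errors += new * pol_rate * length_bp  # copying errors on new DNA
        molecules += new
        # thermal damage accrues on every molecule present in each cycle
        total_errors += molecules * thermal_rate * length_bp
    return total_errors / molecules                 # mean errors per molecule

print(expected_errors(n_cycles=30, length_bp=500))
```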

  16. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    NASA Technical Reports Server (NTRS)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) Optimization tools; 2) Gas turbine models for hybrid systems; 3) Increased interplay between subsystems; 4) Off-design modeling capabilities; 5) Altitude effects; and 6) Existing transient modeling architecture. Other factors include: 1) Easier transfer between users and groups of users; 2) General aerospace industry acceptance and familiarity; and 3) A flexible analysis tool that can also be used for ground power applications.

  17. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  18. Modeling logistic performance in quantitative microbial risk assessment.

    PubMed

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety. PMID:20055976
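
    A minimal sketch of the discrete-event idea: storage times are not sampled from an assumed distribution but emerge from the ordering mechanism and demand. Here a FIFO shelf is restocked to a fixed level each day and daily demand removes the oldest items first; all numbers and the restocking rule are illustrative assumptions.

```python
# Hedged discrete-event toy: shelf storage times emerge from logistics.
import random
from collections import deque

random.seed(1)
shelf, storage_times = deque(), []
RESTOCK_LEVEL = 40
for day in range(60):
    shelf.extend([day] * (RESTOCK_LEVEL - len(shelf)))   # order up to level
    for _ in range(random.randint(10, 30)):              # stochastic demand
        if shelf:
            storage_times.append(day - shelf.popleft())  # FIFO: oldest first
print("mean shelf time:", sum(storage_times) / len(storage_times))
print("max shelf time:", max(storage_times))  # the tail that matters for QMRA
```

    Even this toy version illustrates the abstract's point: the maximum shelf time, which drives the risk tail, is a property of the logistic mechanism rather than of an independently chosen distribution.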

  19. Quantitative Analysis of Cancer Metastasis using an Avian Embryo Model

    PubMed Central

    Palmer, Trenis D.; Lewis, John; Zijlstra, Andries

    2011-01-01

    During metastasis cancer cells disseminate from the primary tumor, invade into surrounding tissues, and spread to distant organs. Metastasis is a complex process that can involve many tissue types, span variable time periods, and often occur deep within organs, making it difficult to investigate and quantify. In addition, the efficacy of the metastatic process is influenced by multiple steps in the metastatic cascade, making it difficult to evaluate the contribution of a single aspect of tumor cell behavior. As a consequence, metastasis assays are frequently performed in experimental animals to provide a necessarily realistic context in which to study metastasis. Unfortunately, these models are further complicated by their complex physiology. The chick embryo is a unique in vivo model that overcomes many limitations to studying metastasis, due to the accessibility of the chorioallantoic membrane (CAM), a well-vascularized extra-embryonic tissue located underneath the eggshell that is receptive to the xenografting of tumor cells (figure 1). Moreover, since the chick embryo is naturally immunodeficient, the CAM readily supports the engraftment of both normal and tumor tissues. Most importantly, the avian CAM successfully supports most cancer cell characteristics including growth, invasion, angiogenesis, and remodeling of the microenvironment. This makes the model exceptionally useful for the investigation of the pathways that lead to cancer metastasis and to predict the response of metastatic cancer to new potential therapeutics. The detection of disseminated cells by species-specific Alu PCR makes it possible to quantitatively assess metastasis in organs that are colonized by as few as 25 cells. Using the human epidermoid carcinoma cell line HEp3, we apply this model to analyze spontaneous metastasis of cancer cells to distant organs, including the chick liver and lung. Furthermore, using the Alu-PCR protocol we demonstrate the sensitivity and reproducibility of the

  20. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    NASA Astrophysics Data System (ADS)

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-01

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy, and results for the ITER antenna configuration, including detailed maps of electric field, and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model [1], using the finite-difference electromagnetic Vorpal/Vsim software [2]. This model has been augmented with a non-linear rf-sheath sub-grid model [3], which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex description of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves [4] in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques [5] to ensure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential should enable the modeling of sheath plasma waves, a predicted additional root to the dispersion

  1. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    SciTech Connect

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-12

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy, and results for the ITER antenna configuration, including detailed maps of electric field, and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model, using the finite-difference electromagnetic Vorpal/Vsim software. This model has been augmented with a non-linear rf-sheath sub-grid model, which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex description of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques to ensure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential should enable the modeling of sheath plasma waves, a predicted additional root to the dispersion, existing at the

  2. A quantitative evaluation of the AVITEWRITE model of handwriting learning.

    PubMed

    Paine, R W; Grossberg, S; Van Gemmert, A W A

    2004-12-01

    Much sensory-motor behavior develops through imitation, as during the learning of handwriting by children. Such complex sequential acts are broken down into distinct motor control synergies, or muscle groups, whose activities overlap in time to generate continuous, curved movements that obey an inverse relation between curvature and speed. The adaptive vector integration to endpoint handwriting (AVITEWRITE) model of Grossberg and Paine (2000) [A neural model of corticocerebellar interactions during attentive imitation and predictive learning of sequential handwriting movements. Neural Networks, 13, 999-1046] addressed how such complex movements may be learned through attentive imitation. The model suggested how parietal and motor cortical mechanisms, such as difference vector encoding, interact with adaptively-timed, predictive cerebellar learning during movement imitation and predictive performance. Key psychophysical and neural data about learning to make curved movements were simulated, including a decrease in writing time as learning progresses; generation of unimodal, bell-shaped velocity profiles for each movement synergy; size scaling with isochrony, and speed scaling with preservation of the letter shape and the shapes of the velocity profiles; an inverse relation between curvature and tangential velocity; and a two-thirds power law relation between angular velocity and curvature. However, the model learned from letter trajectories of only one subject, and only qualitative kinematic comparisons were made with previously published human data. The present work describes a quantitative test of AVITEWRITE through direct comparison of a corpus of human handwriting data with the model's performance when it learns by tracing the human trajectories. The results show that model performance was variable across the subjects, with an average correlation between the model and human data of 0.89 ± 0.10. The present data from simulations using the AVITEWRITE model

  3. Existence of vortices in a self-dual gauged linear sigma model and its singular limit

    NASA Astrophysics Data System (ADS)

    Kim, Namkwon

    2006-03-01

    We study rigorously the static (2 + 1)D gauged linear sigma model introduced by Schroers. Analysing the governing system of partial differential equations, we show the existence of energy finite vortices under the partially broken symmetry on R2 with some conditions consistent with the necessary conditions given by Yang. Also, with a special choice of representation, we show that the gauged O(3) sigma model is a singular limit of the gauged linear sigma model.

  4. Quantitative model of the growth of floodplains by vertical accretion

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the model to be applied to estimate floodplain growth for other floodplains throughout the world that lack detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
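
    A minimal sketch of the model's logic under stated assumptions: every flood whose peak discharge exceeds the threshold discharge corresponding to the current floodplain elevation deposits a constant increment of sediment, so the floodplain rises and growth decelerates as overtopping floods become rarer. The rating curve and flood statistics below are illustrative, not the paper's calibrated values.

```python
# Hedged toy simulation of floodplain growth by vertical accretion.
import numpy as np

rng = np.random.default_rng(0)
D = 0.02                       # net deposition per overtopping flood (m)
c, f = 0.5, 0.4                # assumed stage-discharge rating: h = c * Q**f
h, history = 0.5, []
for year in range(200):
    # partial duration series: a Poisson number of flood peaks per year
    peaks = rng.lognormal(mean=3.0, sigma=0.8, size=rng.poisson(3))
    q_threshold = (h / c) ** (1.0 / f)  # discharge that reaches elevation h
    h += D * np.sum(peaks > q_threshold)
    history.append(h)
print(f"elevation after 200 yr: {history[-1]:.2f} m")
```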

  5. Adapting existing models of highly contagious diseases to countries other than their country of origin.

    PubMed

    Dubé, C; Sanchez, J; Reeves, A

    2011-08-01

    Many countries do not have the resources to develop epidemiological models of animal diseases. As a result, it is tempting to use models developed in other countries. However, an existing model may need to be adapted in order for it to be appropriately applied in a country, region, or situation other than that for which it was originally developed. The process of adapting a model has a number of benefits for both model builders and model users. For model builders, it provides insight into the applicability of their model and potentially the opportunity to obtain data for operational validation of components of their model. For users, it is a chance to think about the infection transmission process in detail, to review the data available for modelling, and to learn the principles of epidemiological modelling. Various issues must be addressed when considering adapting a model. Most critically, the assumptions and purpose behind the model must be thoroughly understood, so that new users can determine its suitability for their situation. The process of adapting a model might simply involve changing existing model parameter values (for example, to better represent livestock demographics in a country or region), or might require more substantial (and more labour-intensive) changes to the model code and conceptual model. Adapting a model is easier if the model has a user-friendly interface and easy-to-read user documentation. In addition, models built as frameworks within which disease processes and livestock demographics and contacts are flexible are good candidates for technology transfer projects, which lead to long-term collaborations. PMID:21961228

  6. A Pleiotropic Nonadditive Model of Variation in Quantitative Traits

    PubMed Central

    Caballero, A.; Keightley, P. D.

    1994-01-01

    A model of mutation-selection-drift balance incorporating pleiotropic and dominance effects of new mutations on quantitative traits and fitness is investigated and used to predict the amount and nature of genetic variation maintained in segregating populations. The model is based on recent information on the joint distribution of mutant effects on bristle traits and fitness in Drosophila melanogaster from experiments on the accumulation of spontaneous and P element-induced mutations. These experiments suggest a leptokurtic distribution of effects with an intermediate correlation between effects on the trait and fitness. Mutants of large effect tend to be partially recessive while those with smaller effect are on average additive, but apparently with very variable gene action. The model is parameterized with two different sets of information derived from P element insertion and spontaneous mutation data, though the latter are not fully known. They differ in the number of mutations per generation which is assumed to affect the trait. Predictions of the variance maintained for bristle number assuming parameters derived from effects of P element insertions, in which the proportion of mutations with an effect on the trait is small, fit reasonably well with experimental observations. The equilibrium genetic variance is nearly independent of the degree of dominance of new mutations. Heritabilities of between 0.4 and 0.6 are predicted with population sizes from 10(4) to 10(6), and most of the variance for the metric trait in segregating populations is due to a small proportion of mutations (about 1% of the total number) with neutral or nearly neutral effects on fitness and intermediate effects on the trait (0.1-0.5σ(P)). Much of the genetic variance is contributed by recessive or partially recessive mutants, but only a small proportion (about 10%) of the genetic variance is dominance variance. The amount of apparent selection on the trait itself generated by the model is

  7. Towards Quantitative Spatial Models of Seabed Sediment Composition.

    PubMed

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom's parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040
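
    A minimal sketch (our reconstruction, not the authors' code) of the pipeline described above: mud/sand/gravel fractions are mapped to two additive log-ratio variables, each log-ratio is predicted by a random forest from environmental predictors, and predictions are back-transformed to fractions. The data here are synthetic.

```python
# Hedged sketch: alr-transformed compositions predicted by random forests.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def alr(comp, eps=1e-6):                       # comp: (n, 3) mud, sand, gravel
    comp = np.clip(comp, eps, None)
    return np.log(comp[:, :2] / comp[:, 2:3])  # two log-ratios vs. gravel

def alr_inv(z):                                # back-transform to fractions
    e = np.exp(np.hstack([z, np.zeros((len(z), 1))]))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                  # stand-ins for depth, currents...
y = alr(rng.dirichlet([2, 5, 1], size=300))    # synthetic compositions
models = [RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y[:, k])
          for k in range(2)]
pred = alr_inv(np.column_stack([m.predict(X) for m in models]))
print(pred[:3])                                # predicted mud/sand/gravel rows
```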

  8. Towards Quantitative Spatial Models of Seabed Sediment Composition

    PubMed Central

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom’s parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040

  9. Sensitivity of quantitative sensory models to morphine analgesia in humans

    PubMed Central

    Olesen, Anne Estrup; Brock, Christina; Sverrisdóttir, Eva; Larsen, Isabelle Myriam; Drewes, Asbjørn Mohr

    2014-01-01

    Introduction Opioid analgesia can be explored with quantitative sensory testing, but most investigations have used models of phasic pain, and such brief stimuli may be limited in the ability to faithfully simulate natural and clinical painful experiences. Therefore, identification of appropriate experimental pain models is critical for our understanding of opioid effects with the potential to improve treatment. Objectives The aim was to explore and compare various pain models to morphine analgesia in healthy volunteers. Methods The study was a double-blind, randomized, two-way crossover study. Thirty-nine healthy participants were included and received morphine 30 mg (2 mg/mL) as oral solution or placebo. To cover both tonic and phasic stimulations, a comprehensive multi-modal, multi-tissue pain-testing program was performed. Results Tonic experimental pain models were sensitive to morphine analgesia compared to placebo: muscle pressure (F=4.87, P=0.03), bone pressure (F=3.98, P=0.05), rectal pressure (F=4.25, P=0.04), and the cold pressor test (F=25.3, P<0.001). Compared to placebo, morphine increased tolerance to muscle stimulation by 14.07%; bone stimulation by 9.72%; rectal mechanical stimulation by 20.40%, and reduced pain reported during the cold pressor test by 9.14%. In contrast, the more phasic experimental pain models were not sensitive to morphine analgesia: skin heat, rectal electrical stimulation, or rectal heat stimulation (all P>0.05). Conclusion Pain models with deep tonic stimulation including C-fiber activation and/or endogenous pain modulation were more sensitive to morphine analgesia. To avoid false-negative results in future studies, we recommend inclusion of reproducible tonic pain models in deep tissues, mimicking clinical pain to a higher degree. PMID:25525384

  10. Discrete modeling of hydraulic fracturing processes in a complex pre-existing fracture network

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rutqvist, J.; Nakagawa, S.; Houseworth, J. E.; Birkholzer, J. T.

    2015-12-01

    Hydraulic fracturing and stimulation of fracture networks are widely used by the energy industry (e.g., shale gas extraction, enhanced geothermal systems) to increase permeability of geological formations. Numerous analytical and numerical models have been developed to help understand and predict the behavior of hydraulically induced fractures. However, many existing models assume simple fracturing scenarios with highly idealized fracture geometries (e.g., propagation of a single fracture with assumed shapes in a homogeneous medium). Modeling hydraulic fracture propagation in the presence of natural fractures and heterogeneities can be very challenging because of the complex interactions between fluid, rock matrix, and rock interfaces, as well as the interactions between propagating fractures and pre-existing natural fractures. In this study, the TOUGH-RBSN code for coupled hydro-mechanical modeling is utilized to simulate hydraulic fracture propagation and its interaction with pre-existing fracture networks. The simulation tool combines TOUGH2, a simulator of subsurface multiphase flow and mass transport based on the finite volume approach, with the implementation of a lattice modeling approach for geomechanical and fracture-damage behavior, named Rigid-Body-Spring Network (RBSN). The discrete fracture network (DFN) approach is facilitated in the Voronoi discretization via a fully automated modeling procedure. The numerical program is verified through a simple simulation for single fracture propagation, in which the resulting fracture geometry is compared to an analytical solution for given fracture length and aperture. Subsequently, predictive simulations are conducted for planned laboratory experiments using rock-analogue (soda-lime glass) samples containing a designed, pre-existing fracture network. The results of a preliminary simulation demonstrate selective fracturing and fluid infiltration along the pre-existing fractures, with additional fracturing in part

  11. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model-fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object which embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superseded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semiquantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area was slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. I enclose a set of abstracts of selected papers on the work described below. Several papers that draw on the research supported during this period appeared in print after the grant period ended.

  12. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment. PMID:27438794
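
    A minimal sketch of the modeling step under stated assumptions: scikit-learn's ordinary PLSRegression stands in for the robust PLS variant actually used, and the data are synthetic stand-ins for the 132 samples and 24 parameters.

```python
# Hedged sketch: PLS regression of one trace element on coal/ash parameters,
# with 10-fold cross-validation to estimate the prediction error (RMSECV).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(132, 24))           # 24 coal/ash parameters, 132 samples
y = X[:, :5] @ rng.normal(size=5) + 0.3 * rng.normal(size=132)  # e.g. Pb

pls = PLSRegression(n_components=5).fit(X, y)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV: {rmsecv:.3f} (relative to range: {rmsecv / np.ptp(y):.1%})")
```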

  13. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment. PMID:27438794

  14. Quantitative determination of guggulsterone in existing natural populations of Commiphora wightii (Arn.) Bhandari for identification of germplasm having higher guggulsterone content.

    PubMed

    Kulhari, Alpana; Sheorayan, Arun; Chaudhury, Ashok; Sarkar, Susheel; Kalia, Rajwant K

    2015-01-01

    Guggulsterone is an aromatic steroidal ketonic compound obtained from the vertical resin ducts and canals of the bark of Commiphora wightii (Arn.) Bhandari (Family - Burseraceae). Owing to its multifarious medicinal and therapeutic values as well as its various other significant bioactivities, guggulsterone is in high demand in the pharmaceutical, perfumery and incense industries. As more and more pharmaceutical and perfumery industries show interest in guggulsterone, there is a need for its quantitative determination in existing natural populations of C. wightii, so that elite germplasm with higher guggulsterone content can be identified and multiplied through conventional or biotechnological means. In the present study an effort was made to estimate the two isoforms of guggulsterone, i.e. E and Z guggulsterone, in raw exudates of 75 accessions of C. wightii collected from three states of North-western India viz. Rajasthan (19 districts), Haryana (4 districts) and Gujarat (3 districts). The extracted steroid-rich fraction from stem samples was fractionated using reverse-phase preparative High Performance Liquid Chromatography (HPLC) coupled with a UV/VIS detector operating at a wavelength of 250 nm. HPLC analysis of stem samples of wild as well as cultivated plants showed that the concentrations of the E and Z isomers as well as total guggulsterone were highest in Rajasthan, as compared to Haryana and Gujarat. The highest concentrations of E guggulsterone (487.45 μg/g) and Z guggulsterone (487.68 μg/g) were found in samples collected from Devikot (Jaisalmer) and Palana (Bikaner) respectively, two hyper-arid regions of Rajasthan, India. The quantitative assay was presented on the basis of a calibration curve obtained from a mixture of standard E and Z guggulsterones with different validatory parameters including linearity, selectivity and specificity, accuracy, auto-injector, flow-rate, recoveries, limit of detection and limit of quantification (as per norms of International

  15. Quantitative rubber sheet models of gravitation wells using Spandex

    NASA Astrophysics Data System (ADS)

    White, Gary

    2008-04-01

    Long a staple of introductory treatments of general relativity, the rubber sheet model exhibits Wheeler's concise summary---``Matter tells space-time how to curve and space-time tells matter how to move''---very nicely. But what of the quantitative aspects of the rubber sheet model: how far can the analogy be pushed? We show^1 that when a mass M is suspended from the center of an otherwise unstretched elastic sheet affixed to a circular boundary, it exhibits a distortion far from the center given by h = A(Mr^2)^(1/3). Here, as might be expected, h and r are the vertical and axial distances from the center, but this result is not the expected logarithmic form of 2-D solutions to Laplace's equation (the stretched drumhead). This surprise has a natural explanation and is confirmed experimentally with Spandex as the medium, and its consequences for general rubber sheet models are pursued. ^1``The shape of `the Spandex' and orbits upon its surface'', American Journal of Physics, 70, 48-52 (2002), G. D. White and M. Walker. See also the comment by Don S. Lemons and T. C. Lipscombe, also in AJP, 70, 1056-1058 (2002).

  16. Existence of global weak solution for a reduced gravity two and a half layer model

    SciTech Connect

    Guo, Zhenhua; Li, Zilai; Yao, Lei

    2013-12-15

    We investigate the existence of a global weak solution to a reduced gravity two and a half layer model in a one-dimensional bounded spatial domain or periodic domain. Also, we show that any possible vacuum state has to vanish within finite time, after which the weak solution becomes a unique strong one.

  17. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  18. Existence of standard models of conic fibrations over non-algebraically-closed fields

    SciTech Connect

    Avilov, A A

    2014-12-31

    We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  19. Global existence for a model of inhomogeneous incompressible elastodynamics in 2D

    NASA Astrophysics Data System (ADS)

    Yin, Silu

    2016-05-01

    In this paper, we investigate a model of incompressible, isotropic, inhomogeneous elastodynamics in two space dimensions, inspired by Lei [18]. We prove global existence for this Cauchy problem with sufficiently small initial displacement and a small density disturbance around a constant state.

  20. Existence of Limit Cycles in the Solow Model with Delayed-Logistic Population Growth

    PubMed Central

    2014-01-01

    This paper is devoted to the existence and stability analysis of limit cycles in a delayed mathematical model of economic growth. Specifically, the Solow model is further improved by inserting the time delay into the logistic population growth rate. Moreover, by choosing the time delay as a bifurcation parameter, we prove that the system loses its stability and a Hopf bifurcation occurs when the time delay passes through critical values. Finally, numerical simulations are carried out to support the analytical results. PMID:24592147
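
    One plausible reading of such a model (our reconstruction; the paper should be consulted for its exact equations) couples Solow capital accumulation to a delayed-logistic labor force, with the delay τ as the bifurcation parameter:

```latex
% k is capital per worker, s the savings rate, \delta depreciation,
% L the labor force, and n(t) its growth rate (our notation).
\begin{align*}
  \dot{k}(t) &= s\,f\!\big(k(t)\big) - \big(\delta + n(t)\big)\,k(t),\\
  \dot{L}(t) &= r\,L(t)\left(1 - \frac{L(t-\tau)}{K}\right),
  \qquad n(t) = \frac{\dot{L}(t)}{L(t)}.
\end{align*}
% For \tau = 0 the positive equilibrium is stable; as \tau crosses a critical
% value the equilibrium loses stability and a limit cycle appears (Hopf).
```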

  1. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    PubMed

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-01-01

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams, with data-centric annotations derived from our existing data warehouse. PMID:18999220

  2. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    NASA Technical Reports Server (NTRS)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  3. Quantitative Modeling of the Alternative Pathway of the Complement System

    PubMed Central

    Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of alternative pathway on the surface of pathogens in which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent on host cells at the same time period. Our model reveals that tight regulation of complement starts in fluid phase in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection. PMID

  4. Quantitative Modeling of the Alternative Pathway of the Complement System.

    PubMed

    Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of alternative pathway on the surface of pathogens in which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent on host cells at the same time period. Our model reveals that tight regulation of complement starts in fluid phase in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection. PMID
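
    A heavily reduced toy sketch of the modeling approach described in the two records above (the actual model tracks dozens of species and reactions): mass-action ODEs for C3 tick-over, convertase formation, amplification, and regulator-enhanced decay, integrated with SciPy. All species, rates, and initial values are illustrative assumptions.

```python
# Hedged toy ODE model of alternative pathway amplification and regulation.
import numpy as np
from scipy.integrate import solve_ivp

k_tick, k_amp, k_conv, k_decay, k_reg = 1e-4, 1e-2, 5e-3, 0.1, 0.05

def rhs(t, y):
    c3, c3b, conv = y                  # native C3, deposited C3b, convertase
    dc3 = -k_tick * c3 - k_amp * conv * c3             # tick-over + feedback
    dc3b = k_tick * c3 + k_amp * conv * c3 - k_conv * c3b
    dconv = k_conv * c3b - (k_decay + k_reg) * conv    # regulator-enhanced decay
    return [dc3, dc3b, dconv]

sol = solve_ivp(rhs, (0, 3600), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 3600, 5)
print(np.round(sol.sol(t).T, 4))       # columns: C3, C3b, convertase over 1 h
```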

  5. Quantitative modeling of the terminal differentiation of B cells and mechanisms of lymphomagenesis

    PubMed Central

    Martínez, María Rodríguez; Corradin, Alberto; Klein, Ulf; Álvarez, Mariano Javier; Toffolo, Gianna M.; di Camillo, Barbara; Califano, Andrea; Stolovitzky, Gustavo A.

    2012-01-01

    Mature B-cell exit from germinal centers is controlled by a transcriptional regulatory module that integrates antigen and T-cell signals and, ultimately, leads to terminal differentiation into memory B cells or plasma cells. Despite a compact structure, the module dynamics are highly complex because of the presence of several feedback loops and self-regulatory interactions, and understanding its dysregulation, frequently associated with lymphomagenesis, requires robust dynamical modeling techniques. We present a quantitative kinetic model of three key gene regulators, BCL6, IRF4, and BLIMP, and use gene expression profile data from mature human B cells to determine appropriate model parameters. The model predicts the existence of two different hysteresis cycles that direct B cells through an irreversible transition toward a differentiated cellular state. By synthetically perturbing the interactions in this network, we can elucidate known mechanisms of lymphomagenesis and suggest candidate tumorigenic alterations, indicating that the model is a valuable quantitative tool to simulate B-cell exit from the germinal center under a variety of physiological and pathological conditions. PMID:22308355

  6. Quantitative dual-probe microdialysis: mathematical model and analysis.

    PubMed

    Chen, Kevin C; Höistad, Malin; Kehr, Jan; Fuxe, Kjell; Nicholson, Charles

    2002-04-01

    Steady-state microdialysis is a widely used technique to monitor the concentration changes and distributions of substances in tissues. To obtain more information about brain tissue properties from microdialysis, a dual-probe approach was applied to infuse and sample the radiotracer, [3H]mannitol, simultaneously both in agar gel and in the rat striatum. Because the molecules released by one probe and collected by the other must diffuse through the interstitial space, the concentration profile exhibits dynamic behavior that permits the assessment of the diffusion characteristics in the brain extracellular space and the clearance characteristics. In this paper a mathematical model for dual-probe microdialysis was developed to study brain interstitial diffusion and clearance processes. Theoretical expressions for the spatial distribution of the infused tracer in the brain extracellular space and the temporal concentration at the probe outlet were derived. A fitting program was developed using the simplex algorithm, which finds local minima of the standard deviations between experiments and theory by adjusting the relevant parameters. The theoretical curves accurately fitted the experimental data and generated realistic diffusion parameters, implying that the mathematical model is capable of predicting the interstitial diffusion behavior of [3H]mannitol and that it will be a valuable quantitative tool in dual-probe microdialysis. PMID:12067242
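
    A minimal sketch of the fitting strategy under stated assumptions: adjust model parameters so a predicted outlet curve matches the measurements by minimizing the squared deviations with the Nelder-Mead simplex method. The placeholder model below is a simple exponential approach to steady state, not the authors' diffusion solution.

```python
# Hedged sketch of simplex (Nelder-Mead) fitting of a model outlet curve.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 120, 25)            # sampling times (min), illustrative
observed = (0.8 * (1 - np.exp(-t / 30))
            + 0.02 * np.random.default_rng(1).normal(size=t.size))

def model(params, t):
    amplitude, tau = params            # placeholder two-parameter model
    return amplitude * (1 - np.exp(-t / tau))

def sse(params):                       # sum of squared deviations
    return np.sum((observed - model(params, t)) ** 2)

fit = minimize(sse, x0=[1.0, 10.0], method="Nelder-Mead")
print("fitted amplitude, tau:", np.round(fit.x, 3))
```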

  7. Quantitative Model of microRNA-mRNA interaction

    NASA Astrophysics Data System (ADS)

    Noorbakhsh, Javad; Lang, Alex; Mehta, Pankaj

    2012-02-01

    MicroRNAs are short RNA sequences that regulate gene expression and protein translation by binding to mRNA. Experimental data reveal the existence of a threshold-linear protein output as a function of microRNA expression level. To understand this behavior, we propose a mathematical model of the chemical kinetics of the interaction between mRNA and microRNA. Using this model we have been able to quantify the threshold-linear behavior. Furthermore, we have studied the effect of internal noise, showing the existence of an intermediate regime where the expression levels of mRNA and microRNA are of the same order of magnitude. In this crossover regime mRNA translation becomes sensitive to small changes in the level of microRNA, resulting in large fluctuations in protein levels. Our work shows that chemical kinetics parameters can be quantified by studying protein fluctuations. In the future, studying protein levels and their fluctuations can provide a powerful tool to study the competing endogenous RNA (ceRNA) hypothesis, in which mRNA crosstalk occurs due to competition over a limited pool of microRNAs.
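
    The threshold-linear intuition can be reproduced with a simple equilibrium titration calculation in the strong-binding limit (the paper's kinetic scheme is more detailed; the numbers below are illustrative):

```python
# Sketch of threshold-linear behavior via equilibrium titration: free mRNA is
# nearly zero until total mRNA exceeds total microRNA, then rises linearly.
import numpy as np

def free_mrna(m_total, mu_total, kd):
    # Positive root of the titration quadratic:
    # m_free^2 + (kd + mu_total - m_total)*m_free - kd*m_total = 0
    b = kd + mu_total - m_total
    return (-b + np.sqrt(b**2 + 4 * kd * m_total)) / 2

mu_total, kd = 50.0, 0.1                 # tight binding -> sharp threshold
for m_total in (10, 40, 49, 51, 60, 100):
    print(m_total, round(free_mrna(m_total, mu_total, kd), 2))
# Free mRNA (hence protein) rises roughly linearly only once m_total
# exceeds mu_total (~50 here).
```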

  8. Quantitative comparisons of analogue models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ˜20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  9. Quantitative phase-field modeling for boiling phenomena

    NASA Astrophysics Data System (ADS)

    Badillo, Arnoldo

    2012-10-01

    A phase-field model is developed for quantitative simulation of bubble growth in the diffusion-controlled regime. The model accounts for phase change and surface tension effects at the liquid-vapor interface of pure substances with large property contrast. The derivation of the model follows a two-fluid approach, where the diffuse interface is assumed to have an internal microstructure, defined by a sharp interface. Although the phases within the diffuse interface are considered to have their own velocities and pressures, an averaging procedure at the atomic scale allows all the constitutive equations to be expressed in terms of mixture quantities. From the averaging procedure and asymptotic analysis of the model, nonconventional terms appear in the energy and phase-field equations to compensate for the variation of the properties across the diffuse interface. Without these new terms, no convergence towards the sharp-interface model can be attained. The asymptotic analysis also revealed a very small thermal capillary length for real fluids, such as water, which makes it impossible for conventional phase-field models to capture bubble growth in the millimeter size range. For instance, important phenomena such as bubble growth and detachment from a hot surface could not be simulated due to the large number of grid points required to resolve all the scales. Since the shape of the liquid-vapor interface is primarily controlled by the effects of an isotropic surface energy (surface tension), a solution involving the elimination of the curvature from the phase-field equation is devised. The elimination of the curvature from the phase-field equation changes the length scale dominating the phase change from the thermal capillary length to the thickness of the thermal boundary layer, which is several orders of magnitude larger. A detailed analysis of the phase-field equation revealed that a split of this equation into two independent parts is possible for system sizes
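
    For readers unfamiliar with diffuse-interface methods, a generic one-dimensional phase-field relaxation (Allen-Cahn type) illustrates the kind of equation involved; the paper's two-fluid boiling model with phase change is far more elaborate:

```python
# Generic 1D phase-field relaxation (Allen-Cahn type) with a double-well
# potential f(phi) = (1 - phi^2)^2 / 4, shown only to illustrate the class of
# diffuse-interface equations discussed above. Not the paper's boiling model.
import numpy as np

nx, dx, dt, w = 200, 0.05, 1e-4, 0.2      # grid, step sizes, interface width
phi = np.tanh(np.linspace(-5, 5, nx))     # initial diffuse interface profile

for _ in range(2000):
    # Periodic-boundary Laplacian via array rolls
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    # dphi/dt = w^2 * laplacian - f'(phi), with -f'(phi) = phi - phi^3
    phi += dt * (w**2 * lap + phi - phi**3)

print(phi[nx // 2 - 2 : nx // 2 + 3])     # interface stays diffuse and centered
```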

  10. Quantitative property-structural relation modeling on polymeric dielectric materials

    NASA Astrophysics Data System (ADS)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications, but the search for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using Quantitative Structure-Property Relationship (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR to polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), is developed to encode the chemical features of pure polymers. ICD are designed to eliminate the uncertainty of polymer conformations and the inconsistency of molecular representations of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperature of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in materials QSPR. PELM is a meta-algorithm that uses classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy than a classic machine learning algorithm (the support vector machine). Multi-mechanism detection is built on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets, each of which can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix

  11. Quantitative phase-field modeling for wetting phenomena.

    PubMed

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of interface thickness W, and β, which is used in the definition of the mixture viscosity μ = μ_l φ_l + μ_v φ_v + β μ_l φ_w. The former controls the convergence towards the sharp interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values for β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates. PMID:25871200

  12. Quantitative modeling of fluorescent emission in photonic crystals

    NASA Astrophysics Data System (ADS)

    Gutmann, Johannes; Zappe, Hans; Goldschmidt, Jan Christoph

    2013-11-01

    Photonic crystals affect the photon emission of embedded emitters due to an altered local density of photon states (LDOS). We review the calculation of the LDOS from eigenmodes in photonic crystals and propose a rate equation model for fluorescent emitters to determine the changes in emission induced by the LDOS. We show how to calculate the modifications of three experimentally accessible characteristics: emission spectrum (spectral redistribution), emitter quantum yield, and fluorescence lifetime. As an example, we present numerical results for the emission of the dye Rhodamine B inside an opal photonic crystal. For such photonic crystals with small permittivity contrast, the LDOS is only weakly modified, resulting in rather small changes. We point out that in experiments, however, usually only part of the emitted light is detected, which can have a very different spectral distribution (e.g., due to a photonic band gap in the direction of detection). We demonstrate the calculation of this detected spectrum for a typical measurement setup. With this reasoning, we explain the previously not fully understood experimental observation that strong spectral modifications occurred, while at the same time only small changes in lifetime were found. With our approach, the mentioned effects can be quantitatively calculated for fluorescent emitters in any photonic crystal.
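
    The rate-equation picture can be sketched by weighting an intrinsic emission spectrum with a relative LDOS and recomputing the radiative rate; all numbers below are illustrative, not the Rhodamine B/opal values from the paper:

```python
# Sketch of LDOS-induced spectral redistribution: weight an emitter's intrinsic
# spectrum by a relative LDOS, then recompute radiative rate, quantum yield and
# lifetime. Spectra, rates and the stop-band dip are illustrative placeholders.
import numpy as np

wavelength = np.linspace(550, 650, 101)                        # nm
intrinsic = np.exp(-((wavelength - 600) / 20.0) ** 2)          # free-space spectrum
rel_ldos = 1 - 0.5 * np.exp(-((wavelength - 590) / 5.0) ** 2)  # stop-band dip

emitted = intrinsic * rel_ldos                 # spectral redistribution
gamma_r0, gamma_nr = 1.0, 0.25                 # free-space rates (1/ns)
gamma_r = gamma_r0 * np.trapz(emitted, wavelength) / np.trapz(intrinsic, wavelength)

quantum_yield = gamma_r / (gamma_r + gamma_nr)
lifetime = 1.0 / (gamma_r + gamma_nr)          # ns; changes little for weak contrast
print(round(quantum_yield, 3), round(lifetime, 3))
```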

  13. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.

    PubMed

    Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-04-01

    To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a single-variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that the joint interaction analysis of multiple phenotypes has a much higher power to detect interaction than the interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes. PMID:27104857

  14. Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model

    SciTech Connect

    Southekal, S.; Purschke, M. L.; Schlyer, D. J.; Vaska, P.

    2011-10-01

    We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover > 90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.

  15. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits

    PubMed Central

    Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-01-01

    To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a single-variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI’s Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that the joint interaction analysis of multiple phenotypes has a much higher power to detect interaction than the interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes. PMID:27104857

  16. Existence of traveling wave solutions in a diffusive predator-prey model.

    PubMed

    Huang, Jianhua; Lu, Gang; Ruan, Shigui

    2003-02-01

    We establish the existence of traveling front solutions and small amplitude traveling wave train solutions for a reaction-diffusion system based on a predator-prey model with Holling type-II functional response. The traveling front solutions are equivalent to heteroclinic orbits in R^4 and the small amplitude traveling wave train solutions are equivalent to small amplitude periodic orbits in R^4. The methods used to prove the results are the shooting argument and the Hopf bifurcation theorem. PMID:12567231

  17. An overview of existing modeling tools making use of model checking in the analysis of biochemical networks

    PubMed Central

    Carrillo, Miguel; Góngora, Pedro A.; Rosenblueth, David A.

    2012-01-01

    Model checking is a well-established technique for automatically verifying complex systems. Recently, model checkers have appeared in computer tools for the analysis of biochemical (and gene regulatory) networks. We survey several such tools to assess the potential of model checking in computational biology. Next, our overview focuses on direct applications of existing model checkers, as well as on algorithms for biochemical network analysis influenced by model checking, such as those using binary decision diagrams (BDDs) or Boolean-satisfiability solvers. We conclude with advantages and drawbacks of model checking for the analysis of biochemical networks. PMID:22833747

  18. Normal fault growth above pre-existing structures: insights from discrete element modelling

    NASA Astrophysics Data System (ADS)

    Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas

    2016-04-01

    In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones not only localise deformation in the host rock, but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems. The technique has therefore been applied to describe fault and fracture growth in a variety of geological settings. We model normal faulting by extending a 60×60×30 km crustal rift-basin model, including brittle and ductile interactions and gravitational and isostatic forces, by 30%. An inclined plane of weakness which represents a pre-existing shear zone is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upwards propagation of pre-existing

  19. Uncertainty in Quantitative Precipitation Estimates and Forecasts in a Hydrologic Modeling Context (Invited)

    NASA Astrophysics Data System (ADS)

    Gourley, J. J.; Kirstetter, P.; Hong, Y.; Hardy, J.; Flamig, Z.

    2013-12-01

    This study presents a methodology to account for uncertainty in radar-based rainfall rate estimation using NOAA/NSSL's Multi-Radar Multisensor (MRMS) products. The focus of the study is on flood forecasting, including flash floods, in ungauged catchments throughout the conterminous US. An error model is used to derive probability distributions of rainfall rates that explicitly account for rain typology and uncertainty in the reflectivity-to-rainfall relationships. This approach preserves the fine space/time sampling properties (2 min/1 km) of the radar and conditions probabilistic quantitative precipitation estimates (PQPE) on the rain rate and rainfall type. Uncertainty in rainfall amplitude is the primary factor accounted for in the PQPE development. Additional uncertainties due to rainfall structures, locations, and timing must be considered when using quantitative precipitation forecast (QPF) products as forcing to a hydrologic model. A new method will be presented that shows how QPF ensembles are used in a hydrologic modeling context to derive probabilistic flood forecast products. This method considers the forecast rainfall intensity and morphology superimposed on pre-existing hydrologic conditions to identify basin scales that are most at risk.

  20. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models

    PubMed Central

    Shimayoshi, Takao; Cha, Chae Young; Amano, Akira

    2015-01-01

    Mathematical cell models are effective tools to understand cellular physiological functions precisely. For detailed analysis of model dynamics, in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate the contributions of the respective model components to the model dynamics in the intact situation. The technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named the instantaneous equilibrium point, which represents the trend of a model variable at a given instant. This article also illustrates applications of the method to comprehensive myocardial cell models, providing insights into the mechanisms of action potential generation and the calcium transient. The analysis results exhibit the quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and fall of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, leading to a better understanding of cellular dynamics. PMID:26091413
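
    The spirit of such a decomposition can be illustrated on a toy membrane equation, splitting dV/dt into per-current contributions (the paper's index and instantaneous equilibrium point are more elaborate; the currents below are hypothetical):

```python
# Toy illustration of decomposing a model's rate of change into per-component
# contributions. The membrane equation and current expressions are invented
# placeholders, not a published myocyte model.
C_m = 1.0                                     # membrane capacitance (uF/cm^2)

def currents(v):
    # Hypothetical instantaneous ionic currents at membrane potential v (mV)
    return {"I_K": 0.3 * (v + 90), "I_Na": 0.02 * (v - 55), "I_leak": 0.1 * (v + 65)}

v = -40.0
contrib = {name: -i / C_m for name, i in currents(v).items()}
total = sum(contrib.values())
for name, dvdt in contrib.items():
    print(f"{name}: {dvdt:+.2f} mV/ms ({dvdt/total:+.1%} of dV/dt)")
print(f"total dV/dt = {total:+.2f} mV/ms")
```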

  1. Evaluation Between Existing and Improved CCF Modeling Using the NRC SPAR Models

    SciTech Connect

    James K. Knudsen

    2010-06-01

    The NRC SPAR models currently employ the alpha factor common cause failure (CCF) methodology and model CCF for a group of redundant components as a single “rolled-up” basic event. These SPAR models will be updated to employ a more computationally intensive and accurate approach by expanding the CCF basic events for all active components to include all terms that appear in the Basic Parameter Model (BPM). A discussion is provided to detail the differences between the rolled-up common cause group (CCG) and expanded BPM adjustment concepts, based on differences in core damage frequency and individual component importance measures. Lastly, a hypothetical condition is evaluated with a SPAR model to show the difference in results between the current adjustment method (rolled-up CCF events) and the newer method employing all of the expanded terms in the BPM. The event evaluation on the SPAR model employing the expanded terms is solved using the graphical evaluation module (GEM) and the proposed method discussed in Reference 1.

  2. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    ERIC Educational Resources Information Center

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
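
    The underlying effect is easy to simulate: with a polychromatic band and a wavelength-dependent molar absorptivity, the measured absorbance falls below the Beer-Lambert line at high concentration. A small sketch with illustrative values:

```python
# Sketch of the polychromatic-light effect: summing transmittance over a
# bandpass with unequal molar absorptivities makes measured absorbance
# deviate from Beer-Lambert linearity at high concentration. Values invented.
import numpy as np

eps = np.array([9000.0, 10000.0, 11000.0])   # L/(mol*cm) across the bandpass
i0 = np.array([1.0, 1.0, 1.0])               # equal source intensity per line
path = 1.0                                   # cm

for conc in (1e-5, 1e-4, 5e-4):
    transmitted = np.sum(i0 * 10.0 ** (-eps * path * conc))
    a_measured = -np.log10(transmitted / np.sum(i0))
    a_ideal = np.mean(eps) * path * conc     # monochromatic (linear) prediction
    print(f"c={conc:.0e}  A_measured={a_measured:.3f}  A_ideal={a_ideal:.3f}")
```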

  3. Numerical Modelling of Extended Leak-Off Test with a Pre-Existing Fracture

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Larsen, I.; Bauer, A.

    2016-04-01

    The extended leak-off test (XLOT) is one of the few techniques available for stress measurements in oil and gas wells. Interpretation of the test is often difficult since the results depend on a multitude of factors, including the presence of natural or drilling-induced fractures in the near-well area. Coupled numerical modelling of XLOT has been performed to investigate the pressure behaviour during the flowback phase as well as the effect of a pre-existing fracture on the test results in a low-permeability formation. Essential features of XLOT known from field measurements are captured by the model, including the saw-tooth shape of the pressure vs injected volume curve, and the change of slope in the pressure vs time curve during flowback used by operators as an indicator of the bottomhole pressure reaching the minimum in situ stress. Simulations with a pre-existing fracture running from the borehole wall in the radial direction have revealed that the results of XLOT are quite sensitive to the orientation of the pre-existing fracture. In particular, the fracture initiation pressure and the formation breakdown pressure increase steadily with decreasing angle between the fracture and the minimum in situ stress. Our findings seem to invalidate the use of the fracture initiation pressure and the formation breakdown pressure for stress measurement or rock strength evaluation purposes.

  4. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients world-wide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine are used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry as to what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet unavailable. Overall, further development is needed to improve and validate animal models for the diverse areas of epilepsy research where suitable fit-for-purpose models are urgently needed in the search for more effective treatments. PMID:27505294

  5. Local Existence of Weak Solutions to Kinetic Models of Granular Media

    NASA Astrophysics Data System (ADS)

    Agueh, Martial

    2016-08-01

    We prove, in any dimension d ≥ 1, local-in-time existence of weak solutions to the Cauchy problem for the kinetic equation of granular media, ∂_t f + v·∇_x f = div_v[f(∇W ∗_v f)], when the initial data are nonnegative, integrable and bounded functions with compact support in velocity, and the interaction potential W is a C²(ℝ^d) radially symmetric convex function. Our proof is constructive and relies on a splitting argument in position and velocity, where the spatially homogeneous equation is interpreted as the gradient flow of a convex interaction energy with respect to the quadratic Wasserstein distance. Our result generalizes the local existence result obtained by Benedetto et al. (RAIRO Modél Math Anal Numér 31(5):615-641, 1997) on the one-dimensional model of this equation for a cubic power-law interaction potential.

  6. Inheritance of pre-existing weakness in continental breakup: 3D numerical modeling

    NASA Astrophysics Data System (ADS)

    Liao, Jie; Gerya, Taras

    2013-04-01

    The transition from continental rifting to seafloor spreading is one of the most important plate-tectonic processes on Earth, yet many questions about it remain poorly understood: How does continental rifting transform into seafloor spreading? How does a curved oceanic ridge develop from a single straight continental rift? How do pre-existing weaknesses in the crust or the lithospheric mantle individually influence continental rifting and oceanic spreading? By employing a state-of-the-art three-dimensional coupled thermomechanical numerical code (using an Eulerian-Lagrangian finite-difference method and the marker-in-cell technique) (Gerya and Yuen, 2007), which can model long-term plate extension and large strains, we studied the whole process of continental rifting to seafloor spreading, focusing on the following question: How does a pre-existing lithospheric weak zone influence continental breakup? Continental rifts do not occur randomly but tend to follow pre-existing weaknesses (such as fault zones, suture zones, failed rifts, and other tectonic boundaries) in the lithosphere. For instance, the western branch of the East African Rift formed in the relatively weak mobile belts along the curved western border of the Tanzanian craton (Corti et al., 2007; Nyblade and Brazier, 2002); the Main Ethiopian Rift developed within a Proterozoic mobile belt believed to represent a continental collision zone (Keranen and Klemperer, 2008); and the Baikal rift formed along the suture between the Siberian craton and the Sayan-Baikal folded belt (Chemenda et al., 2002). An early-formed rift can be a template for future rift development and continental breakup (Keranen and Klemperer, 2008). Lithospheric weakness can reduce either crustal or mantle strength and lead to crustal or mantle necking (Dunbar and Sawyer, 1988), which plays an important role in controlling continental breakup patterns, such as controlling the

  7. Quantitative Models of the Dose-Response and Time Course of Inhalational Anthrax in Humans

    PubMed Central

    Schell, Wiley A.; Bulmahn, Kenneth; Walton, Thomas E.; Woods, Christopher W.; Coghill, Catherine; Gallegos, Frank; Samore, Matthew H.; Adler, Frederick R.

    2013-01-01

    Anthrax poses a community health risk due to accidental or intentional aerosol release. Reliable quantitative dose-response analyses are required to estimate the magnitude and timeline of potential consequences and the effect of public health intervention strategies under specific scenarios. Analyses of available data from exposures and infections of humans and non-human primates are often contradictory. We review existing quantitative inhalational anthrax dose-response models in light of criteria we propose for a model to be useful and defensible. To satisfy these criteria, we extend an existing mechanistic competing-risks model to create a novel Exposure–Infection–Symptomatic illness–Death (EISD) model and use experimental non-human primate data and human epidemiological data to optimize parameter values. The best fit to these data leads to estimates of a dose leading to infection in 50% of susceptible humans (ID50) of 11,000 spores (95% confidence interval 7,200–17,000), ID10 of 1,700 (1,100–2,600), and ID1 of 160 (100–250). These estimates suggest that use of a threshold to human infection of 600 spores (as suggested in the literature) underestimates the infectivity of low doses, while an existing estimate of a 1% infection rate for a single spore overestimates low dose infectivity. We estimate the median time from exposure to onset of symptoms (incubation period) among untreated cases to be 9.9 days (7.7–13.1) for exposure to ID50, 11.8 days (9.5–15.0) for ID10, and 12.1 days (9.9–15.3) for ID1. Our model is the first to provide incubation period estimates that are independently consistent with data from the largest known human outbreak. This model refines previous estimates of the distribution of early onset cases after a release and provides support for the recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses. PMID:24058320
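
    The quoted ID50/ID10/ID1 values scale approximately like a single-hit exponential dose-response; the sketch below inverts that simpler model to reproduce the ratios (the paper's EISD model is mechanistic and richer):

```python
# Sketch: invert a single-hit exponential dose-response anchored at the
# abstract's ID50. This is a simplified stand-in for the EISD model, shown
# only because it reproduces the quoted ID ratios.
import math

id50 = 11000.0                           # spores, from the abstract
k = id50 / math.log(2.0)                 # P(infection) = 1 - exp(-dose/k)

def dose_for(p):
    # Dose at which a fraction p of susceptible individuals is infected
    return -k * math.log(1.0 - p)

for p in (0.5, 0.1, 0.01):
    print(f"ID{int(p * 100):d} ~ {dose_for(p):,.0f} spores")
# -> about 11,000 / 1,700 / 160 spores, close to the values quoted above.
```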

  8. From sample to signal in laser-induced breakdown spectroscopy: An experimental assessment of existing algorithms and theoretical modeling approaches

    NASA Astrophysics Data System (ADS)

    Herrera, Kathleen Kate

    In recent years, laser-induced breakdown spectroscopy (LIBS) has become an increasingly popular technique for many diverse applications, mainly due to its numerous attractive features, including minimal to no sample preparation, minimal sample invasiveness, sample versatility, remote detection capability and simultaneous multi-elemental capability. However, most LIBS applications are limited to semi-quantitative or relative analysis due to the difficulty of finding matrix-matched standards or a constant reference component in the system for calibration purposes. Therefore, methods which do not require the use of reference standards (hence, standard-free methods) are highly desired. In this research, a general LIBS system was constructed, calibrated and optimized. The corresponding instrumental function and relative spectral efficiency of the detection system were also investigated. In addition, development of a spectral acquisition method was necessary so that data in the wide spectral range from 220 to 700 nm could be obtained using a non-echelle detection system. This requires multiple acquisitions of successive spectral windows and splicing the windows together with optimum overlap using an in-house program written in Q-basic. Two existing standard-free approaches, the calibration-free LIBS (CF-LIBS) technique and the Monte Carlo simulated annealing optimization modeling algorithm for LIBS (MC-LIBS), were experimentally evaluated in this research. The CF-LIBS approach, which is based on the Boltzmann plot method, is used to directly evaluate the plasma temperature, electron number density and relative concentrations of species present in a given sample without the need for reference standards. In the second approach, the initial value problem is solved based on the model of a radiative plasma expanding into vacuum. Here, the prediction of the initial plasma conditions (i.e., temperature and elemental number densities) is achieved by a step-wise Monte Carlo
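
    The Boltzmann-plot step at the heart of CF-LIBS can be sketched as a linear regression of ln(Iλ/gA) against upper-level energy, whose slope gives -1/(kT); the line data below are invented for illustration:

```python
# Sketch of the Boltzmann-plot step used by CF-LIBS: regress ln(I*lambda/(g*A))
# against upper-level energy; the slope equals -1/(kT). Line data are invented.
import numpy as np

k_b = 8.617e-5                                    # Boltzmann constant (eV/K)
e_upper = np.array([3.0, 3.5, 4.2, 4.8])          # upper-level energies (eV)
g_a = np.array([2.0e8, 1.5e8, 3.0e8, 1.0e8])      # g_k * A_ki (1/s), hypothetical
lam = np.array([400e-9, 420e-9, 450e-9, 480e-9])  # wavelengths (m)

T_true = 10000.0                                  # K, used only to synthesize data
intensity = g_a / lam * np.exp(-e_upper / (k_b * T_true))

y = np.log(intensity * lam / g_a)                 # Boltzmann coordinates
slope, _ = np.polyfit(e_upper, y, 1)
print(f"T = {-1.0 / (k_b * slope):.0f} K")        # recovers ~10000 K
```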

  9. Development of an Experimental Model of Diabetes Co-Existing with Metabolic Syndrome in Rats

    PubMed Central

    Suman, Rajesh Kumar; Ray Mohanty, Ipseeta; Borde, Manjusha K.; Maheshwari, Ujwala; Deshmukh, Y. A.

    2016-01-01

    Background. The incidence of metabolic syndrome co-existing with diabetes mellitus is on the rise globally. Objective. The present study was designed to develop a unique animal model that will mimic the pathological features seen in individuals with diabetes and metabolic syndrome, suitable for pharmacological screening of drugs. Materials and Methods. A combination of High-Fat Diet (HFD) and a low dose of streptozotocin (STZ) at 30, 35, and 40 mg/kg was used to induce metabolic syndrome in the setting of diabetes mellitus in Wistar rats. Results. The 40 mg/kg STZ dose produced sustained hyperglycemia and was thus selected to induce diabetes mellitus in the study. Various components of metabolic syndrome, such as dyslipidemia (increased triglycerides, total cholesterol, and LDL cholesterol, and decreased HDL cholesterol), diabetes mellitus (blood glucose, HbA1c, serum insulin, and C-peptide), and hypertension (systolic blood pressure), were mimicked in the developed model of metabolic syndrome co-existing with diabetes mellitus. In addition, significant cardiac injury, an elevated atherogenic index, inflammation (hs-CRP), and declines in hepatic and renal function were observed in the HF-DC group when compared to NC group rats. The histopathological assessment confirmed the presence of edema, necrosis, and inflammation in the heart, pancreas, liver, and kidney of the HF-DC group as compared to NC. Conclusion. The present study has developed a unique rodent model of metabolic syndrome, with diabetes as an essential component. PMID:26880906

  10. Existence and qualitative properties of travelling waves for an epidemiological model with mutations

    NASA Astrophysics Data System (ADS)

    Griette, Quentin; Raoul, Gaël

    2016-05-01

    In this article, we are interested in a non-monotonic system of logistic reaction-diffusion equations. This system of equations models an epidemic where two types of pathogens are competing, and a mutation can change one type into the other with a certain rate. We show the existence of travelling waves with minimal speed, which are usually non-monotonic. Then we provide a description of the shape of those constructed travelling waves, and relate them to some Fisher-KPP fronts with non-minimal speed.

  11. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation

  12. Existence and uniqueness of stabilized propagating wave segments in wave front interaction model

    NASA Astrophysics Data System (ADS)

    Guo, Jong-Shenq; Ninomiya, Hirokazu; Tsai, Je-Chiang

    2010-02-01

    Recent experimental studies of the photosensitive Belousov-Zhabotinskii reaction have revealed the existence of propagating wave segments. The propagating wave segments are unstable, but can be stabilized by using a feedback control to continually adjust the excitability of the medium. Experimental studies also indicate that the locus of the size of a stabilized wave segment as a function of the excitability of the medium gives the excitability boundary for the existence of 2D wave patterns with free ends in excitable media. To study the properties of this boundary curve, we use the wave front interaction model proposed by Zykov and Showalter. This is equivalent to studying a first-order system of three ordinary differential equations which includes a singular nonlinearity. Using two different reduced first-order systems of two ordinary differential equations, we first show the existence of wave segments for any given propagating velocity. The wave profiles can then be classified into two types, namely, convex and non-convex. More precisely, when the normalized propagating velocity is small, we show that the wave profile is of convex type, while the wave profile is of non-convex type when the normalized velocity is close to 1.

  13. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    PubMed

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use the potential effects on aquatic organisms are cause for concern. Non-testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are on the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH has been highlighted. PMID:23702385
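
    A schematic of the externally validated QSAR workflow, with a synthetic descriptor matrix standing in for the theoretical molecular descriptors used in the paper:

```python
# Sketch of an externally validated QSAR workflow: fit on a training split,
# judge on held-out chemicals. Descriptors and toxicity values are synthetic
# stand-ins, not the paper's (B)TAZ data or descriptor set.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))                  # 4 molecular descriptors
y = X @ [0.8, -0.5, 0.3, 0.1] + rng.normal(scale=0.2, size=120)  # pLC50-like

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
qsar = LinearRegression().fit(X_tr, y_tr)
print("external R2:", round(r2_score(y_te, qsar.predict(X_te)), 3))
```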

  14. Existence and exponential stability of positive almost periodic solution for Nicholson's blowflies models on time scales.

    PubMed

    Li, Yongkun; Li, Bing

    2016-01-01

    In this paper, we first give a new definition of almost periodic time scales, two new definitions of almost periodic functions on time scales and investigate some basic properties of them. Then, as an application, by using a fixed point theorem in Banach space and the time scale calculus theory, we obtain some sufficient conditions for the existence and exponential stability of positive almost periodic solutions for a class of Nicholson's blowflies models on time scales. Finally, we present an illustrative example to show the effectiveness of obtained results. Our results show that under a simple condition the continuous-time Nicholson's blowflies model and its discrete-time analogue have the same dynamical behaviors. PMID:27468397

  15. Existence of the critical endpoint in the vector meson extended linear sigma model

    NASA Astrophysics Data System (ADS)

    Kovács, P.; Szép, Zs.; Wolf, Gy.

    2016-06-01

    The chiral phase transition of strongly interacting matter is investigated at nonzero temperature and baryon chemical potential (μ_B) within an extended (2+1)-flavor Polyakov constituent quark-meson model that incorporates the effect of the vector and axial vector mesons. The effect of the fermionic vacuum and thermal fluctuations computed from the grand potential of the model is taken into account in the curvature masses of the scalar and pseudoscalar mesons. The parameters of the model are determined by comparing masses and tree-level decay widths with experimental values in a χ²-minimization procedure that selects between various possible assignments of scalar nonet states to physical particles. We examine the restoration of the chiral symmetry by monitoring the temperature evolution of the condensates, the chiral partners' masses, and the mixing angles of the pseudoscalar η-η′ and the corresponding scalar complex. We calculate the pressure and various thermodynamical observables derived from it and compare them to the continuum-extrapolated lattice results of the Wuppertal-Budapest collaboration. We study the T-μ_B phase diagram of the model and find that a critical endpoint exists for parameter sets of the model which give acceptable values of χ².

  16. Model based prediction of the existence of the spontaneous cochlear microphonic

    NASA Astrophysics Data System (ADS)

    Ayat, Mohammad; Teal, Paul D.

    2015-12-01

    In the mammalian cochlea, self-sustaining oscillation of the basilar membrane can cause vibration of the ear drum and produce spontaneous narrow-band air pressure fluctuations in the ear canal. These spontaneous fluctuations are known as spontaneous otoacoustic emissions. Small perturbations in the feedback gain of the cochlear amplifier have been proposed as the generation source of self-sustaining oscillations of the basilar membrane. We hypothesise that the self-sustaining oscillations resulting from small perturbations in feedback gain produce spontaneous potentials in the cochlea. We demonstrate that, according to the results of the model, a measurable spontaneous cochlear microphonic must exist in the human cochlea. The existence of this signal has not yet been reported, but this spontaneous electrical signal could play an important role in auditory research: successful or unsuccessful recording of it will indicate whether previous hypotheses about the generation source of spontaneous otoacoustic emissions are valid or should be amended. In addition, according to the proposed model, the spontaneous cochlear microphonic is essentially an electrical analogue of spontaneous otoacoustic emissions. In certain experiments, the spontaneous cochlear microphonic may be more easily detected near its generation site with proper electrical instrumentation than is the spontaneous otoacoustic emission.

  17. What Are We Doing When We Translate from Quantitative Models?

    ERIC Educational Resources Information Center

    Critchfield, Thomas S.; Reed, Derek D.

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may…

  18. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  19. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models

    PubMed Central

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-01-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee – varroa mite – virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431

  20. Endoscopic skull base training using 3D printed models with pre-existing pathology.

    PubMed

    Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes

    2015-03-01

    Endoscopic base of skull surgery has been growing in acceptance in the recent past due to improvements in visualisation and micro instrumentation, as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides a very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platybasia and basilar invagination were created from the original MRI and CT imaging data of the patient. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to rate the level of difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially, as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practise complex procedures in a controlled fashion under the supervision of experts. PMID:25294050

  1. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models.

    PubMed

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-08-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee - varroa mite - virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431

  2. Existence and stability of limit cycles in a macroscopic neuronal population model

    NASA Astrophysics Data System (ADS)

    Rodrigues, Serafim; Gonçalves, Jorge; Terry, John R.

    2007-09-01

    We present rigorous results concerning the existence and stability of limit cycles in a macroscopic model of neuronal activity. The specific model we consider is developed from the Ki set methodology, popularized by Walter Freeman. In particular we focus on a specific reduction of the KII sets, denoted RKII sets. We analyse the unfolding of supercritical Hopf bifurcations via consideration of the normal forms and centre manifold reductions. Subsequently we analyse the global stability of limit cycles on a region of parameter space and this is achieved by applying a new methodology termed Global Analysis of Piecewise Linear Systems. The analysis presented may also be used to consider coupled systems of this type. A number of macroscopic mean-field approaches to modelling human EEG may be considered as coupled RKII networks. Hence developing a theoretical understanding of the onset of oscillations in models of this type has important implications in clinical neuroscience, as limit cycle oscillations have been demonstrated to be critical in the onset of certain types of epilepsy.

  3. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, a rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken: the HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry, such as chemical or fire protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.
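
    A minimal sketch of the ROM-restriction step described above: each joint angle of the IVA human model is clamped into the narrower suited range. The joint names and limits below are invented for illustration, not taken from the paper.

```python
# Clamp each human-model joint angle into the (narrower) suited range.
# Joint names and degree limits are illustrative assumptions.
IVA_ROM = {"shoulder_flex": (-60.0, 180.0), "elbow_flex": (0.0, 145.0)}
EVA_ROM = {"shoulder_flex": (-30.0, 120.0), "elbow_flex": (10.0, 110.0)}

def restrict_pose(pose_deg: dict) -> dict:
    """Clamp joint angles of the IVA model into the suited (EVA) ROM."""
    return {j: min(max(a, EVA_ROM[j][0]), EVA_ROM[j][1])
            for j, a in pose_deg.items()}

print(restrict_pose({"shoulder_flex": 150.0, "elbow_flex": 5.0}))
# -> {'shoulder_flex': 120.0, 'elbow_flex': 10.0}
```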

  4. Using Existing Coastal Models To Address Ocean Acidification Modeling Needs: An Inside Look at Several East and Gulf Coast Regions

    NASA Astrophysics Data System (ADS)

    Jewett, E.

    2013-12-01

    Ecosystem forecast models have been in development for many US coastal regions for decades in an effort to understand how certain drivers, such as nutrients, freshwater and sediments, affect coastal water quality. These models have been used to inform coastal management interventions such as imposition of total maximum daily load allowances for nutrients or sediments to control hypoxia, harmful algal blooms and/or water clarity. Given the overlap of coastal acidification with hypoxia, it seems plausible that the geochemical models built to explain hypoxia and/or HABs might also be used, with additional terms, to understand how atmospheric CO2 is interacting with local biogeochemical processes to affect coastal waters. Examples of existing biogeochemical models from Galveston, the northern Gulf of Mexico, Tampa Bay, West Florida Shelf, Pamlico Sound, Chesapeake Bay, and Narragansett Bay will be presented and explored for suitability for ocean acidification modeling purposes.

  5. Existence, numerical convergence and evolutionary relaxation for a rate-independent phase-transformation model.

    PubMed

    Heinz, Sebastian; Mielke, Alexander

    2016-04-28

    We revisit the model for a two-well phase transformation in a linearly elastic body that was introduced and studied in Mielke et al. (2002 Arch. Ration. Mech. Anal. 162, 137-177). This energetic rate-independent system is posed in terms of the elastic displacement and an internal variable that gives the phase portion of the second phase. We use a new approach based on mutual recovery sequences, which are adjusted to a suitable energy increment plus the associated dissipated energy and, thus, enable us to pass to the limit in the construction of energetic solutions. We give three distinct constructions of mutual recovery sequences which allow us (i) to generalize the existence result in Mielke et al. (2002), (ii) to establish the convergence of suitable numerical approximations via space-time discretization and (iii) to perform the evolutionary relaxation from the pure-state model to the relaxed-mixture model. All these results rely on weak convergence and involve the H-measure as an essential tool. PMID:27002066

  6. Existence of solutions to the Stommel-Charney model of the Gulf Stream

    SciTech Connect

    Barcilon, V.; Constantin, P.; Titi, E.S.

    1988-11-01

    This paper discusses the existence of weak solutions to the equations as a model of the Gulf Stream. The method of artificial viscosity is also discussed. Key words: Navier-Stokes equation, artificial viscosity, ocean circulation, DOE. The authors examine the mathematical properties of an equation arising in the theory of ocean circulation. In order to understand the role of this problem in oceanography, a brief review of the subject is given. The first successful attempt to provide a mathematical description of the mid-latitude ocean currents was made by other investigators. It was shown conclusively that a Gulf Stream-like intensification on the western side of an ocean basin could be explained by the so-called β-effect. This is the geophysical terminology for the latitudinal variation of the normal component of the earth's rotation. Aside from this variable Coriolis force, the other forces which entered into Stommel's model were those due to the pressure gradient, the surface winds, and friction. For the sake of simplicity, this last force was taken to be proportional to the velocity fields. All the effects of density stratification were neglected by making the assumption that the ocean was homogeneous. Finally, by working with vertical averages, Stommel essentially treated the ocean circulation as a two-dimensional horizontal motion. Somewhat surprisingly, Stommel's ad hoc, linear model was shown later to provide an accurate description of an actual experimental setup.

  7. Frequency domain modeling and dynamic characteristics evaluation of existing wind turbine systems

    NASA Astrophysics Data System (ADS)

    Chiang, Chih-Hung; Yu, Chih-Peng

    2016-04-01

    It is quite well accepted that frequency domain procedures are suitable for the design and dynamic analysis of wind turbine structures, especially for floating offshore wind turbines, since random wind loads and wave induced motions are most likely simulated in the frequency domain. This paper presents specific applications of an effective frequency domain scheme to the linear analysis of wind turbine structures in which a 1-D spectral element was developed based on the axially-loaded member. The solution schemes are summarized for the spectral analyses of the tower, the blades, and the combined system with selected frequency-dependent coupling effect from foundation-structure interactions. Numerical examples demonstrate that the modal frequencies obtained using spectral-element models are in good agreement with those found in the literature. A 5-element mono-pile model results in less than 0.3% deviation from an existing 160-element model. It is preliminarily concluded that the proposed scheme is relatively efficient in performing quick verification for test data obtained from the on-site vibration measurement using the microwave interferometer.

  8. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model.

    PubMed

    Ma, Rongfei

    2015-01-01

    In this paper, a quantitative ammonia analysis method based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain current-voltage (I-V) data, which were then processed with the non-linear bistable dynamics model. Results showed that the proposed method quantitatively determined ammonia concentrations. PMID:25975362

  9. Dynamics of childhood growth and obesity development and validation of a quantitative mathematical model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Clinicians and policy makers need the ability to predict quantitatively how childhood bodyweight will respond to obesity interventions. We developed and validated a mathematical model of childhood energy balance that accounts for healthy growth and development of obesity, and that makes quantitative...

  10. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    through the processor, individually contributing to the nearest grid posts in a memory-mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that do not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process and giving us new insight into the precision of the current model, which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for the integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi-campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering-level data for the current data sets. This is essential if future generations of DEM users should be able to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
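
    A rough sketch of the anomaly-screening shortcut described above, with all names and the threshold chosen for illustration: points that agree with the current DEM within the threshold are dropped, and only the remaining anomalies are gridded into a correction layer.

```python
import numpy as np

# Drop points whose anomaly against the current DEM is insignificant, then
# grid only the remaining anomalies into a correction grid (illustrative).
def correction_grid(points, dem, origin, cell, threshold=0.1):
    """points: (N, 3) array of x, y, z; dem: 2D array of current heights."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - origin[0]) / cell).astype(int)
    row = ((y - origin[1]) / cell).astype(int)
    anomaly = z - dem[row, col]
    keep = np.abs(anomaly) >= threshold           # ignore "nothing new" points
    corr = np.zeros_like(dem)
    n = np.zeros_like(dem)
    np.add.at(corr, (row[keep], col[keep]), anomaly[keep])
    np.add.at(n, (row[keep], col[keep]), 1)
    nonzero = n > 0
    corr[nonzero] /= n[nonzero]                   # mean anomaly per cell
    return corr, keep.sum() / len(points)         # grid + fraction retained

rng = np.random.default_rng(0)
dem = np.zeros((10, 10))
pts = np.column_stack([rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000),
                       rng.normal(0, 0.2, 1000)])
corr, frac = correction_grid(pts, dem, origin=(0, 0), cell=1.0)
print(f"retained {frac:.0%} of points for the update")
```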

  11. Future observational and modelling needs identified on the basis of the existing shelf data

    NASA Astrophysics Data System (ADS)

    Berlamont, J.; Radach, G.; Becker, G.; Colijn, F.; Gekeler, J.; Laane, R. W. P. M.; Monbaliu, J.; Prandle, D.; Sündermann, J.; van Raaphorst, W.; Yu, C. S.

    1996-09-01

    NOWESP has compiled a vast quantity of existing data from the north-west European shelf. Such a focused task is without precedent. It is now highly recommended that one, or a few, national and international data centres or agencies should be chosen and properly supported by the EU, where all available observational data, including the NOWESP data, are collected, stored, regularly updated by the providers of the data, and made available to researchers. International agreement must be reached on the quality control procedures and quality standards for data to be stored in these databases. Proper arrangements should be made to preserve the economic value of the data for their “owners” without compromising use of the data by researchers or duplicating data collecting efforts. The continental shelf data needed are concentration fields of temperature, salinity, nutrients, suspended matter and chlorophyll, which can be called “climatological” fields. For this purpose at least one monthly survey of the whole European shelf is needed for at least five years, with a proper spatial resolution, e.g. 1° by 1°, and at least in those areas where climatological data are now totally lacking. From the modelling point of view an alternative would be the availability of data from sufficiently representative fixed stations on the shelf, with weekly sampling for several years. It should be realized that there are hardly any data available on the shelf boundaries. Therefore, one should consider a European effort to set up a limited network of stations, especially at the shelf edge, where a limited, selected set of parameters is measured on a long-term basis (time series) for use in modelling and for interpreting long-term natural changes in the marine environment and changes due to human interference (eutrophication, pollutants, climatic changes, biodiversity changes). The EU could foster coordination of nationally organized measuring campaigns in Europe

  12. Impact assessment of abiotic resources in LCA: quantitative comparison of selected characterization models.

    PubMed

    Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F

    2014-10-01

    Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA. PMID:25208267
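
    The pairwise log-linear comparison of characterization factors can be illustrated as below; the data are synthetic and the three "methods" are placeholders for the eleven evaluated in the paper.

```python
import numpy as np
from itertools import combinations

# Correlate log-transformed characterization factors (CFs) between method
# pairs over the resources both methods cover. Synthetic data throughout.
rng = np.random.default_rng(1)
resources = [f"res{i}" for i in range(40)]
methods = {m: dict(zip(resources, np.exp(rng.normal(0, 2, 40))))
           for m in ("A", "B", "C")}

for m1, m2 in combinations(methods, 2):
    shared = [r for r in resources if r in methods[m1] and r in methods[m2]]
    x = np.log10([methods[m1][r] for r in shared])
    y = np.log10([methods[m2][r] for r in shared])
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"{m1} vs {m2}: {len(shared)} shared resources, R^2 = {r2:.2f}")
```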

  13. Physically based estimation of soil water retention from textural data: General framework, new models, and streamlined existing models

    USGS Publications Warehouse

    Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.

    2007-01-01

    Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculations and other factors. © Soil Science Society of America. All rights reserved.
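
    The capillary inverse proportionality named above can be made concrete with the Young-Laplace relation for a cylindrical pore; the proportionality between pore radius and particle radius used here is an illustrative assumption, not one of the paper's specific geometric models.

```python
import numpy as np

# Young-Laplace for a cylindrical pore: |psi| = 2 * sigma * cos(theta) / r.
# Linking pore radius r to particle radius R via r = a * R (a is an
# illustrative packing parameter) yields a psi-R relation.
SIGMA = 0.0728      # surface tension of water at 20 C, N/m
THETA = 0.0         # contact angle, radians (perfect wetting assumed)

def matric_pressure(R_particle_m, a=0.3):
    r_pore = a * R_particle_m
    return -2.0 * SIGMA * np.cos(THETA) / r_pore   # Pa (negative: suction)

for R in (2e-6, 2e-5, 2e-4):   # roughly clay-, silt-, sand-sized particles
    print(f"R = {R:.0e} m  ->  psi = {matric_pressure(R):.3e} Pa")
```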

  14. Photon-tissue interaction model for quantitative assessment of biological tissues

    NASA Astrophysics Data System (ADS)

    Lee, Seung Yup; Lloyd, William R.; Wilson, Robert H.; Chandra, Malavika; McKenna, Barbara; Simeone, Diane; Scheiman, James; Mycek, Mary-Ann

    2014-02-01

    In this study, we describe a direct fit photon-tissue interaction model to quantitatively analyze reflectance spectra of biological tissue samples. The model rapidly extracts biologically-relevant parameters associated with tissue optical scattering and absorption. This model was employed to analyze reflectance spectra acquired from freshly excised human pancreatic pre-cancerous tissues (intraductal papillary mucinous neoplasm (IPMN), a common precursor lesion to pancreatic cancer). Compared to previously reported models, the direct fit model improved fit accuracy and speed. Thus, these results suggest that such models could serve as real-time, quantitative tools to characterize biological tissues assessed with reflectance spectroscopy.

  15. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of the thiM operon and transcription and translation of the thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all three, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by the A. thaliana riboswitch is governed by mass-action law, whereas it is of kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model's prediction of their duration. Our analysis also led to quantitative estimates of the respective efficiency of kinetic and thermodynamic regulations, which shows that kinetically regulated riboswitches react more sharply to concentration variation of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506

  16. A QUANTITATIVE PEDOLOGY APPROACH TO CONTINUOUS SOIL LANDSCAPE MODELS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Continuous representations of soil profiles and landscapes are needed to provide input into process based models and to move beyond the categorical paradigm of horizons and map-units. Continuous models of soil landscapes should be driven by the factors and processes of the soil genetic model. Parame...

  17. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  18. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  19. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many critical properties, such as security, safety, survivability, fault tolerance, and real-time.

  20. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • SDMProjectBuilder (which includes the Microbial Source Module as part...

  1. Quantitative assessment of meteorological and tropospheric Zenith Hydrostatic Delay models

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Guo, Jiming; Chen, Ming; Shi, Junbo; Zhou, Lv

    2016-09-01

    Tropospheric delay has always been an important issue in GNSS/DORIS/VLBI/InSAR processing. Most commonly used empirical models for the determination of tropospheric Zenith Hydrostatic Delay (ZHD), including three meteorological models and two empirical ZHD models, are carefully analyzed in this paper. Meteorological models refer to UNB3m, GPT2 and GPT2w, while ZHD models include Hopfield and Saastamoinen. By reference to in-situ meteorological measurements and ray-traced ZHD values of 91 globally distributed radiosonde sites, over a four-years period from 2010 to 2013, it is found that there is strong correlation between errors of model-derived values and latitudes. Specifically, the Saastamoinen model shows a systematic error of about -3 mm. Therefore a modified Saastamoinen model is developed based on the "best average" refractivity constant, and is validated by radiosonde data. Among different models, the GPT2w and the modified Saastamoinen model perform the best. ZHD values derived from their combination have a mean bias of -0.1 mm and a mean RMS of 13.9 mm. Limitations of the present models are discussed and suggestions for further improvements are given.
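
    For reference, the standard Saastamoinen ZHD formula (as commonly given in the GNSS literature) can be coded directly. The roughly -3 mm systematic bias reported above is applied here as a simple additive correction; the paper instead derives a modified refractivity constant, which is not reproduced here.

```python
import math

# Standard Saastamoinen Zenith Hydrostatic Delay from surface pressure,
# latitude and height (Davis et al. formulation of the constants).
def zhd_saastamoinen(p_hpa, lat_deg, h_m):
    """ZHD in metres."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 2.8e-7 * h_m
    return 0.0022768 * p_hpa / f

def zhd_bias_corrected(p_hpa, lat_deg, h_m, bias_m=-0.003):
    # Remove the reported ~ -3 mm systematic error (illustrative correction).
    return zhd_saastamoinen(p_hpa, lat_deg, h_m) - bias_m

print(f"{zhd_saastamoinen(1013.25, 45.0, 100.0):.4f} m")   # ~2.31 m
```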

  2. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    PubMed

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. PMID:22197801

  3. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level. PMID:15376934
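
    The reported test summary, the fraction of samples failing at a given significance level, can be reproduced in miniature with synthetic data: for a correctly specified, fully known model, the failure rate should track the significance level itself.

```python
import numpy as np
from scipy import stats

# Many small samples: compute the percentage rejected by a goodness-of-fit
# test at each significance level. Synthetic standard-normal data; the
# paper's SAR models are not used here.
rng = np.random.default_rng(2)
samples = [rng.normal(0, 1, 8) for _ in range(5000)]    # 5000 samples of n=8
pvals = np.array([stats.kstest(s, "norm").pvalue for s in samples])

for alpha in (0.01, 0.05, 0.10):
    frac = (pvals < alpha).mean()
    print(f"alpha = {alpha:.2f}: {frac:.1%} of samples fail")  # ~ alpha
```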

  4. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
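
    A minimal implementation of Dempster's rule of combination for two basic probability assignments; the frame of discernment and fault hypotheses are invented for illustration, not drawn from the paper's diagnostic models.

```python
from itertools import product

# Dempster's rule: combine two basic probability assignments (masses over
# frozensets of hypotheses), renormalizing away conflicting mass.
# Assumes the two assignments are not totally conflicting (k > 0).
def dempster(m1: dict, m2: dict) -> dict:
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

FRAME = frozenset({"sensor_fault", "actuator_fault", "no_fault"})
m_model1 = {frozenset({"sensor_fault"}): 0.6, FRAME: 0.4}
m_model2 = {frozenset({"sensor_fault", "actuator_fault"}): 0.7, FRAME: 0.3}
for s, v in dempster(m_model1, m_model2).items():
    print(set(s), round(v, 3))
```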

  5. Thermodynamic Modeling of a Solid Oxide Fuel Cell to Couple with an Existing Gas Turbine Engine Model

    NASA Technical Reports Server (NTRS)

    Brinson, Thomas E.; Kopasakis, George

    2004-01-01

    The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in combining a solid oxide fuel cell (SOFC) with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate a SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For the project's application, the main issue with this model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create a SOFC model, based on the DOE model, that takes the fuel cell's flow rates as inputs and outputs the temperatures of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two procedures. First, for a given flow rate, the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and temperature curve will be generated, where the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve because voltage is a function of temperature. Once the temperature profile is created and validated, the model will then be placed into the turbine engine simulation for system analysis.
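
    The First Law step described above reduces, for a single steady flow path with constant specific heat, to a one-line energy balance; the numbers and the constant-cp assumption are illustrative only.

```python
# Steady-flow energy balance: T_out = T_in + Q_net / (m_dot * cp).
# All values below are illustrative, not from the DOE SOFC model.
def exit_temperature(t_in_k, q_net_w, m_dot_kg_s, cp_j_kg_k=1100.0):
    """Exit stream temperature for a single steady flow path."""
    return t_in_k + q_net_w / (m_dot_kg_s * cp_j_kg_k)

# e.g. 20 kW of waste heat into a 0.5 kg/s gas stream entering at 1000 K:
print(f"T_out = {exit_temperature(1000.0, 20e3, 0.5):.1f} K")  # 1036.4 K
```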

  6. A Quantitative Causal Model Theory of Conditional Reasoning

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  7. On the Non-Existence of Optimal Solutions and the Occurrence of "Degeneracy" in the CANDECOMP/PARAFAC Model

    ERIC Educational Resources Information Center

    Krijnen, Wim P.; Dijkstra, Theo K.; Stegeman, Alwin

    2008-01-01

    The CANDECOMP/PARAFAC (CP) model decomposes a three-way array into a prespecified number of "R" factors and a residual array by minimizing the sum of squares of the latter. It is well known that an optimal solution for CP need not exist. We show that if an optimal CP solution does not exist, then any sequence of CP factors monotonically decreasing…

  8. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. PMID:25951756

  9. Detection of cardiomyopathy in an animal model using quantitative autoradiography

    SciTech Connect

    Kubota, K.; Som, P.; Oster, Z.H.; Brill, A.B.; Goodman, M.M.; Knapp, F.F. Jr.; Atkins, H.L.; Sole, M.J.

    1988-10-01

    A fatty acid analog, 15-(p-iodophenyl)-3,3-dimethylpentadecanoic acid (DMIPP), was studied in cardiomyopathic (CM) and normal age-matched Syrian hamsters. Dual-tracer quantitative whole-body autoradiography (QARG) with DMIPP and 2-[¹⁴C(U)]-2-deoxy-2-fluoro-D-glucose (FDG) or with FDG and ²⁰¹Tl enabled comparison of the uptake of a fatty acid and a glucose analog with the blood flow. These comparisons were carried out at the onset and mid-stage of the disease, before congestive failure developed. Groups of CM and normal animals were treated with verapamil from the age of 26 days, before the onset of the disease, for 41 days. In CM hearts, areas of decreased DMIPP uptake were seen. These areas were much larger than the decrease in uptake of FDG or ²⁰¹Tl. In early CM only minimal changes in FDG or ²⁰¹Tl uptake were observed as compared to controls. Treatment of CM-prone animals with verapamil prevented any changes in DMIPP, FDG, or ²⁰¹Tl uptake. DMIPP seems to be a more sensitive indicator of early cardiomyopathic changes as compared to ²⁰¹Tl or FDG. The trial of DMIPP and SPECT in the diagnosis of human disease, as well as for monitoring the effects of drugs which may prevent it, seems to be warranted.

  10. Quantitative Methods for Comparing Different Polyline Stream Network Models

    SciTech Connect

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring relative horizontal accuracy of complex linear spatial features are described and sample source code (pseudo code) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving extracting of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that the direct delineation from LiDAR point clouds yielded an excellent and much better match, as indicated by the LRMSE.
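
    A hedged sketch of the two measures named above (the authors' exact formulations may differ): sinuosity as path length over straight-line distance, its ratio between derived and reference networks, and an RMSE of distances from points sampled along the derived line to the reference line.

```python
import numpy as np

# Polylines are (N, 2) arrays of vertices.
def length(line):
    return np.sum(np.linalg.norm(np.diff(line, axis=0), axis=1))

def sinuosity(line):
    return length(line) / np.linalg.norm(line[-1] - line[0])

def relative_sinuosity(derived, reference):
    return sinuosity(derived) / sinuosity(reference)

def rmse_to_reference(derived, reference, step=1.0):
    """RMSE of nearest-vertex distances, sampling the derived line every
    `step` units -- a crude stand-in for the paper's LRMSE."""
    d = np.cumsum(np.r_[0, np.linalg.norm(np.diff(derived, axis=0), axis=1)])
    s = np.arange(0, d[-1], step)
    pts = np.column_stack([np.interp(s, d, derived[:, 0]),
                           np.interp(s, d, derived[:, 1])])
    dists = np.min(np.linalg.norm(pts[:, None] - reference[None], axis=2),
                   axis=1)
    return np.sqrt(np.mean(dists ** 2))

ref = np.array([[0, 0], [5, 1], [10, 0]], float)
der = np.array([[0, 0.2], [5, 1.3], [10, -0.1]], float)
print(relative_sinuosity(der, ref), rmse_to_reference(der, ref))
```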

  11. Global existence of the three-dimensional viscous quantum magnetohydrodynamic model

    SciTech Connect

    Yang, Jianwei; Ju, Qiangchang

    2014-08-15

    The global-in-time existence of weak solutions to the viscous quantum magnetohydrodynamic equations in a three-dimensional torus with large data is proved, using the Faedo-Galerkin method and weak compactness techniques.

  12. Toward a class-independent quantitative structure-activity relationship model for uncouplers of oxidative phosphorylation.

    PubMed

    Spycher, Simon; Smejtek, Pavel; Netzeva, Tatiana I; Escher, Beate I

    2008-04-01

    A mechanistically based quantitative structure-activity relationship (QSAR) for the uncoupling activity of weak organic acids has been derived. The analysis of earlier experimental studies suggested that the limiting step in the uncoupling process is the rate with which anions can cross the membrane and that this rate is determined by the height of the energy barrier encountered in the hydrophobic membrane core. We use this mechanistic understanding to develop a predictive model for uncoupling. The translocation rate constants of anions correlate well with the free energy difference between the energy well and the energy barrier, ΔG(well-barrier, A⁻), in the membrane calculated by a novel approach to describe internal partitioning in the membrane. An existing data set of 21 phenols measured in an in vitro test system specific for uncouplers was extended by 14 highly diverse compounds. A simple regression model based on the experimental membrane-water partition coefficient and ΔG(well-barrier, A⁻) showed good predictive power and had meaningful regression coefficients. To establish uncoupler QSARs independent of chemical class, it is necessary to calculate the descriptors for the charged species, as the analogous descriptors of the neutral species showed almost no correlation with the translocation rate constants of anions. The substitution of experimental with calculated partition coefficients resulted in a decrease of the model fit. A particular strength of the current model is the accurate calculation of excess toxicity, which makes it a suitable tool for database screening. The applicability domain, limitations of the model, and ideas for future research are critically discussed. PMID:18358007

  13. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
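
    The flavour of such logic models can be shown with a generic three-gene repression ring under synchronous Boolean update, which produces sustained oscillations; this toy network is not one of the paper's fitted clock circuits.

```python
# Three-gene Boolean repression ring, synchronous update: each gene is ON
# next step iff its repressor is currently OFF. Yields a period-6 oscillation.
def step(state):
    x1, x2, x3 = state
    return (not x3, not x1, not x2)   # each gene represses the next

state = (True, False, False)
for t in range(8):
    print(t, ["ON" if g else "off" for g in state])
    state = step(state)
```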

  14. Efficient Recycled Algorithms for Quantitative Trait Models on Phylogenies

    PubMed Central

    Hiscott, Gordon; Fox, Colin; Parry, Matthew; Bryant, David

    2016-01-01

    We present an efficient and flexible method for computing likelihoods for phenotypic traits on a phylogeny. The method does not resort to Monte Carlo computation but instead blends Felsenstein’s discrete character pruning algorithm with methods for numerical quadrature. It is not limited to Gaussian models and adapts readily to model uncertainty in the observed trait values. We demonstrate the framework by developing efficient algorithms for likelihood calculation and ancestral state reconstruction under Wright’s threshold model, applying our methods to a data set of trait data for extrafloral nectaries across a phylogeny of 839 Fabales species. PMID:27056412

  15. A quantitative model of plasma in Neptune's magnetosphere

    NASA Astrophysics Data System (ADS)

    Richardson, J. D.

    1993-07-01

    A model encompassing plasma transport and energy processes is applied to Neptune's magnetosphere. Starting with profiles of the neutral densities and the electron temperature, the model calculates the plasma density and ion temperature profiles. Good agreement between model results and observations is obtained for a neutral source of 5 × 10²⁵/s if the diffusion coefficient is 10⁻⁸ L³R_N²/s, plasma is lost at a rate 1/3 that of the strong diffusion rate, and plasma subcorotates in the region outside Triton.

  16. Efficient Recycled Algorithms for Quantitative Trait Models on Phylogenies.

    PubMed

    Hiscott, Gordon; Fox, Colin; Parry, Matthew; Bryant, David

    2016-01-01

    We present an efficient and flexible method for computing likelihoods for phenotypic traits on a phylogeny. The method does not resort to Monte Carlo computation but instead blends Felsenstein's discrete character pruning algorithm with methods for numerical quadrature. It is not limited to Gaussian models and adapts readily to model uncertainty in the observed trait values. We demonstrate the framework by developing efficient algorithms for likelihood calculation and ancestral state reconstruction under Wright's threshold model, applying our methods to a data set of trait data for extrafloral nectaries across a phylogeny of 839 Fabales species. PMID:27056412

  17. A Quantitative Model of Honey Bee Colony Population Dynamics

    PubMed Central

    Khoury, David S.; Myerscough, Mary R.; Barron, Andrew B.

    2011-01-01

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem. PMID:21533156
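
    A two-compartment hive/forager sketch in the spirit of the model described above, integrated by forward Euler; the parameter values and the social-inhibition recruitment term are illustrative, not the paper's fitted ones. Raising the forager death rate past a threshold flips the outcome from a stable population to collapse.

```python
# Hive bees (H) eclose from brood, are recruited to foraging (F), and
# foragers die at rate m. Recruitment slows as the forager fraction grows
# (social inhibition). All parameters are illustrative.
L_MAX, W = 2000.0, 27000.0     # max eclosion rate, brood saturation constant
ALPHA, SIGMA = 0.25, 0.75      # base recruitment rate, social inhibition

def simulate(m_forager, days=250, dt=0.01, H0=16000.0, F0=8000.0):
    H, F = H0, F0
    for _ in range(int(days / dt)):
        N = H + F
        if N <= 0:
            return 0.0
        recruit = max(ALPHA - SIGMA * F / N, 0.0)   # hive bees -> foragers
        dH = L_MAX * N / (W + N) - recruit * H
        dF = recruit * H - m_forager * F
        H, F = H + dt * dH, F + dt * dF
    return H + F

for m in (0.1, 0.3, 0.6):       # forager death rate per day
    print(f"m = {m}: colony size after 250 d ~ {simulate(m):,.0f}")
```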

  18. Quantitative comparisons of numerical models of brittle deformation

    NASA Astrophysics Data System (ADS)

    Buiter, S.

    2009-04-01

    Numerical modelling of brittle deformation in the uppermost crust can be challenging owing to the requirement of an accurate pressure calculation, the ability to achieve post-yield deformation and localisation, and the choice of rheology (plasticity law). One way to approach these issues is to conduct model comparisons that can evaluate the effects of different implementations of brittle behaviour in crustal deformation models. We present a comparison of three brittle shortening experiments for fourteen different numerical codes, which use finite element, finite difference, boundary element and distinct element techniques. Our aim is to constrain and quantify the variability among models in order to improve our understanding of causes leading to differences between model results. Our first experiment of translation of a stable sand-like wedge serves as a reference that allows for testing against analytical solutions (e.g., taper angle, root-mean-square velocity and gravitational rate of work). The next two experiments investigate an unstable wedge in a sandbox-like setup which deforms by inward translation of a mobile wall. All models accommodate shortening by in-sequence formation of forward shear zones. We analyse the location, dip angle and spacing of thrusts in detail as previous comparisons have shown that these can be highly variable in numerical and analogue models of crustal shortening and extension. We find that an accurate implementation of boundary friction is important for our models. Our results are encouraging in the overall agreement in their dynamic evolution, but show at the same time the effort that is needed to understand shear zone evolution. GeoMod2008 Team: Markus Albertz, Michele Cooke, Susan Ellis, Taras Gerya, Luke Hodkinson, Kristin Hughes, Katrin Huhn, Boris Kaus, Walter Landry, Bertrand Maillot, Christophe Pascal, Anton Popov, Guido Schreurs, Christopher Beaumont, Tony Crook, Mario Del Castello and Yves Leroy

  19. Quantitative comparisons of numerical models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne

    2010-05-01

    Numerical and laboratory models are often used to investigate the evolution of deformation processes at various scales in crust and lithosphere. In both approaches, the freedom in choice of simulation method, materials and their properties, and deformation laws could affect model outcomes. To assess the role of modelling method and to quantify the variability among models, we have performed a comparison of laboratory and numerical experiments. Here, we present results of 11 numerical codes, which use finite element, finite difference and distinct element techniques. We present three experiments that describe shortening of a sand-like, brittle wedge. The material properties of the numerical 'sand', the model set-up and the boundary conditions are strictly prescribed and follow the analogue setup as closely as possible. Our first experiment translates a non-accreting wedge with a stable surface slope of 20 degrees. In agreement with critical wedge theory, all models maintain the same surface slope and do not deform. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge in a sandbox-like setup, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. We show that we successfully simulate sandbox-style brittle behaviour using different numerical modelling techniques and that we obtain the same styles of deformation behaviour in numerical and laboratory experiments at similar levels of variability. The GeoMod2008 Numerical Team: Markus Albertz, Michelle Cooke, Tony Crook, David Egholm, Susan Ellis, Taras Gerya, Luke Hodkinson, Boris Kaus, Walter Landry, Bertrand Maillot, Yury Mishin

  20. Quantitative structure-(chromatographic) retention relationship models for dissociating compounds.

    PubMed

    Kubik, Łukasz; Wiczling, Paweł

    2016-08-01

    The aim of this work was to develop mathematical models relating the hydrophobicity and dissociation constant of an analyte to its structure, which would be useful in predicting analyte retention times in reversed-phase liquid chromatography. For that purpose a large and diverse group of 115 drugs was used to build three QSRR models combining retention-related parameters (log kw, the chromatographic measure of hydrophobicity; S, the slope factor from the Snyder-Soczewinski equation; and pKa) with structural descriptors calculated by means of molecular modeling for both dissociated and nondissociated forms of analytes. Lasso, stepwise and PLS regressions were used to build statistical models. Moreover, simple QSRR equations based on lipophilicity and dissociation-constant parameters calculated in the ACD/Labs software were proposed and compared with quantum-chemistry-based QSRR equations. The obtained relationships were further used to predict chromatographic retention times. The predictive performances of the obtained models were assessed using 10-fold cross-validation and external validation. The QSRR equations developed were simple and were characterized by satisfactory predictive performance. Application of quantum-chemistry-based and ACD-based descriptors leads to similar accuracy of retention time prediction. PMID:26960942
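
    The two retention parameters named above come from the linear Snyder-Soczewinski relation log k = log kw - S·φ, where φ is the organic modifier fraction, so both can be recovered from isocratic measurements by a straight-line fit (synthetic data below):

```python
import numpy as np

# Fit log k = log kw - S * phi by least squares; data are synthetic.
phi = np.array([0.3, 0.4, 0.5, 0.6, 0.7])           # methanol fraction
logk = np.array([1.52, 1.05, 0.58, 0.11, -0.36])    # measured log k

slope, logkw = np.polyfit(phi, logk, 1)             # slope = -S
print(f"log kw = {logkw:.2f}, S = {-slope:.2f}")    # -> log kw ~ 2.93, S ~ 4.70
```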

  1. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology

    PubMed Central

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian

    2012-01-01

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼ 10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10⁶, of normal HSCs. Radiobiologic estimates favor values > 10⁶ for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  2. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology.

    PubMed

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian; Sachs, Rainer K

    2012-05-10

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10⁶, of normal HSCs. Radiobiologic estimates favor values > 10⁶ for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  3. Magnetospheric mapping with a quantitative geomagnetic field model

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Mead, G. D.

    1975-01-01

    Mapping the magnetosphere on a dipole geomagnetic field model by projecting field and particle observations onto the model is described. High-latitude field lines are traced between the earth's surface and their intersection with either the equatorial plane or a cross section of the geomagnetic tail, and data from low-altitude orbiting satellites are projected along field lines to the outer magnetosphere. This procedure is analyzed, and the resultant mappings are illustrated. Extension of field lines into the geomagnetic tail and low-altitude determination of the polar cap and cusp are presented. It is noted that while there is good agreement among the various data, more particle measurements are necessary to clear up statistical uncertainties and to facilitate comparison of statistical models.
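
    For a pure dipole, the field-line tracing that underlies this kind of mapping follows from r = L·cos²λ (r in Earth radii, λ the magnetic latitude), so an equatorial crossing distance L maps to a surface footpoint at the invariant latitude acos(√(1/L)):

```python
import math

# Along a dipole field line, r = L * cos^2(lambda); setting r = 1 (Earth's
# surface) gives the footpoint (invariant) latitude for a given L shell.
def footpoint_latitude_deg(L):
    return math.degrees(math.acos(math.sqrt(1.0 / L)))

for L in (4, 6.6, 10, 20):   # L = 6.6 ~ geosynchronous orbit
    print(f"L = {L:>4}: footpoint at {footpoint_latitude_deg(L):.1f} deg")
```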

  4. Analysis of protein complexes through model-based biclustering of label-free quantitative AP-MS data

    PubMed Central

    Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I

    2010-01-01

    Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein–protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving bait proteins as compared with proteins identified only as prey. The method does not require specification of the number of bait clusters, which is an advantage over existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate-scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data. PMID:20571534

  5. Analysis of protein complexes through model-based biclustering of label-free quantitative AP-MS data.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I

    2010-06-22

    Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein-protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving bait proteins as compared with proteins identified only as prey. The method does not require specification of the number of bait clusters, which is an advantage over existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate-scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data. PMID:20571534

  6. Quantitative experimental modelling of fragmentation during explosive volcanism

    NASA Astrophysics Data System (ADS)

    Thordén Haug, Ø.; Galland, O.; Gisler, G.

    2012-04-01

    Phreatomagmatic eruptions result from the violent interaction between magma and an external source of water, such as ground water or a lake. This interaction causes fragmentation of the magma and/or the host rock, resulting in coarse-grained (lapilli) to very fine-grained (ash) material. The products of phreatomagmatic explosions are classically described by their fragment size distribution, which commonly follows power laws of exponent D. Such a descriptive approach, however, considers only the final products and provides no information on the dynamics of fragmentation. The aim of this contribution is thus to address the following fundamental questions. What physics governs fragmentation processes? How does fragmentation occur through time? What mechanisms produce power law fragment size distributions? And what scaling laws control the exponent D? To address these questions, we performed a quantitative experimental study. The setup consists of a Hele-Shaw cell filled with a layer of cohesive silica flour, at the base of which a pulse of pressurized air is injected, leading to fragmentation of the layer of flour. The fragmentation process is monitored through time using a high-speed camera. By varying systematically the air pressure (P) and the thickness of the flour layer (h), we observed two morphologies of fragmentation: "lift off", where the silica flour above the injection inlet is ejected upwards, and "channeling", where the air pierces through the layer along a sub-vertical conduit. By building a phase diagram, we show that the morphology is controlled by P/dgh, where d is the density of the flour and g is the gravitational acceleration. To quantify the fragmentation process, we developed a Matlab image analysis program, which calculates the number and sizes of the fragments, and thus the fragment size distribution, during the experiments. The fragment size distributions are in general described by power law distributions of
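
    The exponent D of such a power-law fragment size distribution can be estimated by maximum likelihood (the continuous Hill/Clauset-style estimator, assuming a known lower cutoff); the data here are synthetic, not the experimental fragment sizes:

```python
import numpy as np

# For p(x) ~ x^-(D+1), i.e. N(>x) ~ x^-D above x_min, the MLE is
# D = n / sum(ln(x_i / x_min)). Synthetic sizes via inverse-CDF sampling.
def fit_D(sizes, x_min):
    x = np.asarray(sizes, float)
    x = x[x >= x_min]
    return len(x) / np.sum(np.log(x / x_min))

rng = np.random.default_rng(3)
D_true, x_min = 2.4, 1e-4                           # e.g. fragment sizes in m
u = rng.uniform(size=20_000)
sizes = x_min * (1.0 - u) ** (-1.0 / D_true)
print(f"fitted D = {fit_D(sizes, x_min):.2f} (true {D_true})")
```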

  7. Quantitative modeling and analysis in environmental studies. Technical report

    SciTech Connect

    Gaver, D.P.

    1994-10-01

    This paper reviews some of the many mathematical modeling and statistical data analysis problems that arise in environmental studies. It makes no claim to be comprehensive or truly up-to-date. It will appear as a chapter in a book on ecotoxicology to be published by CRC Press, probably in 1995. Workshops leading to the book's creation were sponsored by The Conte Foundation.

  8. Unified quantitative model of AMPA receptor trafficking at synapses

    PubMed Central

    Czöndör, Katalin; Mondin, Magali; Garcia, Mikael; Heine, Martin; Frischknecht, Renato; Choquet, Daniel; Sibarita, Jean-Baptiste; Thoumine, Olivier R.

    2012-01-01

    Trafficking of AMPA receptors (AMPARs) plays a key role in synaptic transmission. However, a general framework integrating the two major mechanisms regulating AMPAR delivery at postsynapses (i.e., surface diffusion and internal recycling) is lacking. To this aim, we built a model based on numerical trajectories of individual AMPARs, including free diffusion in the extrasynaptic space, confinement in the synapse, and trapping at the postsynaptic density (PSD) through reversible interactions with scaffold proteins. The AMPAR/scaffold kinetic rates were adjusted by comparing computer simulations to single-particle tracking and fluorescence recovery after photobleaching experiments in primary neurons, in different conditions of synapse density and maturation. The model predicts that the steady-state AMPAR number at synapses is bidirectionally controlled by AMPAR/scaffold binding affinity and PSD size. To reveal the impact of recycling processes in basal conditions and upon synaptic potentiation or depression, spatially and temporally defined exocytic and endocytic events were introduced. The model predicts that local recycling of AMPARs close to the PSD, coupled to short-range surface diffusion, provides rapid control of AMPAR number at synapses. In contrast, because of long-range diffusion limitations, extrasynaptic recycling is intrinsically slower and less synapse-specific. Thus, by discriminating the relative contributions of AMPAR diffusion, trapping, and recycling events on spatial and temporal bases, this model provides unique insights on the dynamic regulation of synaptic strength. PMID:22331885

  9. Comprehensive Quantitative Model of Inner-Magnetosphere Dynamics

    NASA Technical Reports Server (NTRS)

    Wolf, Richard A.

    2002-01-01

    This report includes descriptions of papers, a thesis, and works still in progress which cover observations of space weather in the Earth's magnetosphere. The topics discussed include: 1) modelling of magnetosphere activity; 2) magnetic storms; 3) high energy electrons; and 4) plasmas.

  10. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    ERIC Educational Resources Information Center

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  11. A quantitative risk model for early lifecycle decision making

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.

  12. Quantitative models of magnetic and electric fields in the magnetosphere

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1975-01-01

    In order to represent the magnetic field B in the magnetosphere various auxiliary functions can be used: the current density, the scalar potential, toroidal and poloidal potentials, and Euler potentials -- or else, the components of B may be expanded directly. The most versatile among the linear representations is the one based on toroidal and poloidal potentials; it has seen relatively little use in the past but appears to be the most promising one for future work. Other classifications of models include simple testbed models vs. comprehensive ones and analytical vs. numerical representations. The electric field E in the magnetosphere is generally assumed to vary only slowly and to be orthogonal to B, allowing the use of a scalar potential which may be deduced from observations in the ionosphere, from the shape of the plasmapause, or from particle observations in synchronous orbits.
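
    For reference, the toroidal-poloidal representation singled out above is usually written (in a standard form; the paper's own conventions may differ) as

        \mathbf{B} = \nabla \times (T\,\mathbf{r}) + \nabla \times \nabla \times (P\,\mathbf{r}),
        \qquad \nabla \cdot \mathbf{B} \equiv 0,

    where T and P are the toroidal and poloidal scalar potentials and \mathbf{r} is the position vector; any field of this form is automatically divergence-free, which is one reason the representation is so versatile.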

  13. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    PubMed

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods currently in routine use. PMID:22363636
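
    The Frank-Tamm part of such a model reduces to a short calculation. A minimal sketch for the photon yield per unit path length over a wavelength band, assuming a dispersionless refractive index (the full model described in the abstract also accounts for the radionuclide's β spectrum and particle transport):

        import numpy as np

        ALPHA = 1.0 / 137.036  # fine-structure constant

        def cerenkov_photons_per_m(beta, n=1.33, lam1=400e-9, lam2=700e-9, z=1):
            # Frank-Tamm photon yield per metre of path for a particle of
            # charge z*e and speed beta*c, integrated over lam1..lam2, for a
            # dispersionless refractive index n (water assumed).
            if beta * n <= 1.0:
                return 0.0  # below the Cerenkov threshold: no emission
            return (2.0 * np.pi * ALPHA * z ** 2
                    * (1.0 / lam1 - 1.0 / lam2)
                    * (1.0 - 1.0 / (beta ** 2 * n ** 2)))

    For β ≈ 1 in water (n ≈ 1.33) this gives roughly 2 × 10^4 photons per metre over the visible band, i.e. about 20 photons per millimetre.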

  14. Quantitative Risk Modeling of Fire on the International Space Station

    NASA Technical Reports Server (NTRS)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  15. Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    PubMed Central

    Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in accessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636

  16. A quantitative assessment of torque-transducer models for magnetoreception

    PubMed Central

    Winklhofer, Michael; Kirschvink, Joseph L.

    2010-01-01

    Although ferrimagnetic material appears suitable as a basis of magnetic field perception in animals, it is not known by which mechanism magnetic particles may transduce the magnetic field into a nerve signal. Provided that magnetic particles have remanence or anisotropic magnetic susceptibility, an external magnetic field will exert a torque and may physically twist them. Several models of such biological magnetic-torque transducers on the basis of magnetite have been proposed in the literature. We analyse from first principles the conditions under which they are viable. Models based on biogenic single-domain magnetite prove both effective and efficient, irrespective of whether the magnetic structure is coupled to mechanosensitive ion channels or to an indirect transduction pathway that exploits the stray field produced by the magnetic structure at different field orientations. On the other hand, torque-detector models that are based on magnetic multi-domain particles in the vestibular organs turn out to be ineffective. Also, we provide a generic classification scheme of torque transducers in terms of axial or polar output, within which we discuss the results from behavioural experiments conducted under altered field conditions or with pulsed fields. We find that the common assertion that a magnetoreceptor based on single-domain magnetite could not form the basis for an inclination compass does not always hold. PMID:20086054
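
    The first-principles viability test at the heart of such an analysis is essentially a comparison of magnetic torque energy against thermal noise. A toy calculation in that spirit (the particle size, chain length, field strength and temperature below are illustrative assumptions, not values from the paper):

        K_B = 1.380649e-23  # Boltzmann constant, J/K

        def torque_to_thermal_ratio(volume_m3, b_tesla=50e-6,
                                    m_s=4.8e5, temp_k=300.0):
            # Ratio of the maximal magnetic torque energy m*B to thermal
            # energy kT for a single-domain magnetite particle; m_s is the
            # saturation magnetization of magnetite (A/m).
            moment = m_s * volume_m3  # magnetic moment, A m^2
            return moment * b_tesla / (K_B * temp_k)

        # a chain of 20 magnetosomes of ~(50 nm)^3 each (assumed geometry)
        ratio = 20 * torque_to_thermal_ratio((50e-9) ** 3)

    With these assumed numbers the ratio comes out around 15, i.e. comfortably above thermal noise, which is the kind of check that separates viable single-domain designs from ineffective multi-domain ones.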

  17. Afference copy as a quantitative neurophysiological model for consciousness.

    PubMed

    Cornelis, Hugo; Coop, Allan D

    2014-06-01

    Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider that there is nothing particularly complicated about consciousness when viewed as a necessary process of the vertebrate nervous system. Here, we propose that a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications. PMID:25012715

  18. Quantitative empirical model of the magnetospheric flux-transfer process

    SciTech Connect

    Holzer, R.E.; McPherron, R.L.; Hardy, D.A.

    1986-03-01

    A simple model for estimating the open flux in the polar cap was based on precipitating electron data from polar orbiting satellites. This model was applied in the growth phase of two substorms on March 27, 1979, to determine the fraction of the flux of the southward IMF which merged at the forward magnetopause, contributing to the polar cap flux. The effective merging efficiency at the forward magnetopause was found to be 0.19 ± 0.03 under average solar wind conditions. The westward electrojet current during the expansion and recovery phases of the same substorms was approximately proportional to the time rate of decrease of polar flux due to merging in the tail. An empirical model for calculating polar-cap flux changes using the merging at the forward magnetopause for estimating increases and the westward electrojet for decreases was compared with observed changes in the polar-cap flux. Agreement between the predicted and observed changes in the polar-cap flux was tested over an interval of 8 hours. The advantages and limitations of the method are discussed.
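
    The empirical bookkeeping described here can be sketched as a simple flux-balance integration: dayside merging (with the quoted efficiency of about 0.19) adds open flux, while tail reconnection, proxied by the westward electrojet, removes it. Everything in the sketch except the 0.19 efficiency is an illustrative assumption, including the effective merging length, the electrojet coupling constant and the initial flux:

        import numpy as np

        def polar_cap_flux(t, v_sw, b_south, electrojet,
                           eps=0.19, l_eff=4.0e7, k_tail=1.0e5, phi0=5.0e8):
            # Open flux grows with dayside merging, eps * v_sw * B_south * L,
            # and decays at a rate proportional to the westward electrojet
            # strength; l_eff, k_tail and phi0 are illustrative values only.
            phi = np.empty_like(t, dtype=float)
            phi[0] = phi0
            for k in range(1, t.size):
                dt = t[k] - t[k - 1]
                dayside = eps * v_sw[k] * max(b_south[k], 0.0) * l_eff
                phi[k] = phi[k - 1] + (dayside - k_tail * electrojet[k]) * dt
            return phi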

  19. Quantitative description of realistic wealth distributions by kinetic trading models

    NASA Astrophysics Data System (ADS)

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power law behavior x^{-(1+α)} at the high end, where, in general, α is greater than 1 (Pareto’s law). Models based on kinetic theory, where a set of interacting agents trade money, yield power law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, finding the saving propensity distribution which yields a given wealth distribution for all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.

  20. Quantitative description of realistic wealth distributions by kinetic trading models.

    PubMed

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power law behavior x^{-(1+alpha)} at the high end, where, in general, alpha is greater than 1 (Pareto's law). Models based on kinetic theory, where a set of interacting agents trade money, yield power law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, finding the saving propensity distribution which yields a given wealth distribution for all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets. PMID:18999570
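
    The forward model underlying both versions of this record is the standard kinetic exchange process with quenched saving propensities; the paper solves the inverse problem of inferring the propensity distribution from wealth data. A minimal forward sketch (the uniform propensity distribution is assumed purely for illustration):

        import numpy as np

        rng = np.random.default_rng(0)

        def kinetic_exchange(n_agents=10_000, n_steps=2_000_000):
            # Pairwise money exchange with quenched saving propensities
            # lam[i]; total money is conserved in every trade. A uniform
            # propensity distribution is assumed here for illustration.
            money = np.ones(n_agents)
            lam = rng.uniform(0.0, 1.0, n_agents)
            for _ in range(n_steps):
                i, j = rng.integers(0, n_agents, 2)
                if i == j:
                    continue
                pool = (1 - lam[i]) * money[i] + (1 - lam[j]) * money[j]
                eps = rng.random()
                money[i] = lam[i] * money[i] + eps * pool
                money[j] = lam[j] * money[j] + (1 - eps) * pool
            return money, lam

    With heterogeneous propensities this dynamics is known to develop a Pareto-like power-law tail; inverting it amounts to choosing the propensity distribution so that the stationary wealth distribution matches the data.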

  1. Concentric Coplanar Capacitive Sensor System with Quantitative Model

    NASA Technical Reports Server (NTRS)

    Bowler, Nicola (Inventor); Chen, Tianming (Inventor)

    2014-01-01

    A concentric coplanar capacitive sensor includes a charged central disc forming a first electrode, an outer annular ring coplanar with and exterior to the charged central disc, the outer annular ring forming a second electrode, and a gap between the charged central disc and the outer annular ring. The first electrode and the second electrode may be attached to an insulative film. A method provides for determining the transcapacitance between the first electrode and the second electrode and using the transcapacitance in a model that accounts for a dielectric test piece, in order to inversely determine the properties of the test piece.

  2. [Establishment of simultaneous quantitative model of five alkaloids from Corydalis Rhizoma by near-infrared spectrometry].

    PubMed

    Yang, Li-xin; Zhang, Yong-xin; Feng, Wei-hong; Li, Chun

    2015-10-01

    This paper established a near-infrared spectroscopy quantitative model for simultaneous quantitative analysis of coptisine hydrochloride, dehydrocorydaline, tetrahydropalmatine, corydaline and glaucine in Corydalis Rhizoma. Firstly, the chemical values of the five components in Corydalis Rhizoma were determined by reversed-phase high performance liquid chromatography (RP-HPLC) with UV detection. Then, the quantitative calibration model was established and optimized by Fourier transform near-infrared spectroscopy (NIRS) combined with partial least squares (PLS) regression. The calibration model was evaluated by the correlation coefficient (r), the root-mean-square error of calibration (RMSEC) and the root-mean-square error of cross-validation (RMSECV) of the calibration model, as well as the correlation coefficient (r) and the root-mean-square error of prediction (RMSEP) of the prediction model. For the quantitative calibration model, the r values of coptisine hydrochloride, dehydrocorydaline, tetrahydropalmatine, corydaline and glaucine were 0.9410, 0.9727, 0.9643, 0.9781 and 0.9799; the RMSEC values were 0.0067, 0.0035, 0.0059, 0.0028 and 0.0059; and the RMSECV values were 0.015, 0.011, 0.020, 0.010 and 0.022, respectively. For the prediction model, the r values of the five components were 0.9166, 0.9429, 0.9436, 0.9167 and 0.9145, and the RMSEP values were 0.009, 0.0066, 0.0075, 0.0069 and 0.011, respectively. The established near-infrared spectroscopy quantitative model is relatively stable, accurate and reliable for the simultaneous quantitative analysis of the five alkaloids, and is expected to be used for the rapid determination of the five components in crude drug of Corydalis Rhizoma. PMID:26975110
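
    The calibration workflow in this record — PLS regression of NIR spectra against HPLC reference values, evaluated by RMSEC, RMSECV and RMSEP — maps directly onto standard chemometrics tooling. A minimal sketch (the number of latent variables and the 10-fold cross-validation are assumptions; the abstract does not state them):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def fit_nirs_pls(X_cal, Y_cal, X_test, Y_test, n_components=8):
            # X: NIR spectra (samples x wavelengths); Y: HPLC reference
            # values (samples x 5 alkaloids). n_components is a tuning choice.
            pls = PLSRegression(n_components=n_components).fit(X_cal, Y_cal)
            rmsec = np.sqrt(np.mean((pls.predict(X_cal) - Y_cal) ** 2, axis=0))
            y_cv = cross_val_predict(pls, X_cal, Y_cal, cv=10)  # for RMSECV
            rmsecv = np.sqrt(np.mean((y_cv - Y_cal) ** 2, axis=0))
            rmsep = np.sqrt(np.mean((pls.predict(X_test) - Y_test) ** 2, axis=0))
            return pls, rmsec, rmsecv, rmsep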

  3. Existence, uniqueness and stability of positive periodic solution for a nonlinear prey-competition model with delays

    NASA Astrophysics Data System (ADS)

    Chen, Fengde; Xie, Xiangdong; Shi, Jinlin

    2006-10-01

    A nonlinear periodic predator-prey model with m preys and (n-m) predators and delays is proposed in this paper, which can be seen as a modification of the traditional Lotka-Volterra prey-competition model. Sufficient conditions which guarantee the existence of a unique globally attractive positive periodic solution of the system are obtained.

  4. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  5. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the exemplar…

  6. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  7. Canalization, genetic assimilation and preadaptation. A quantitative genetic model.

    PubMed Central

    Eshel, I; Matessi, C

    1998-01-01

    We propose a mathematical model to analyze the evolution of canalization for a trait under stabilizing selection, where each individual in the population is randomly exposed to different environmental conditions, independently of its genotype. Without canalization, our trait (primary phenotype) is affected by both genetic variation and environmental perturbations (morphogenic environment). Selection of the trait depends on individually varying environmental conditions (selecting environment). Assuming no plasticity initially, morphogenic effects are not correlated with the direction of selection in individual environments. Under quite plausible assumptions we show that natural selection favors a system of canalization that tends to repress deviations from the phenotype that is optimal in the most common selecting environment. However, many experimental results, dating back to Waddington and others, indicate that natural canalization systems may fail under extreme environments. While this can be explained as an impossibility of the system to cope with extreme morphogenic pressure, we show that a canalization system that tends to be inactivated in extreme environments is even more advantageous than rigid canalization. Moreover, once this adaptive canalization is established, the resulting evolution of primary phenotype enables substantial preadaptation to permanent environmental changes resembling extreme niches of the previous environment. PMID:9691063

  8. A quantitative confidence signal detection model: 1. Fitting psychometric functions.

    PubMed

    Yi, Yongwoo; Merfeld, Daniel M

    2016-04-01

    Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks. PMID:26763777
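
    The conventional baseline that the confidence-based estimator is compared against is a maximum-likelihood fit of a psychometric function to binary forced-choice responses. A minimal sketch of that baseline (a cumulative Gaussian is assumed; the paper's confidence signal detection model additionally fits the probability judgments themselves):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def fit_psychometric(stimulus, response):
            # Binary forced-choice responses (0/1) versus stimulus magnitude;
            # fit a cumulative Gaussian with bias mu and noise sigma by
            # maximum likelihood.
            stimulus = np.asarray(stimulus, float)
            response = np.asarray(response, int)

            def nll(params):
                mu, log_sigma = params
                p = norm.cdf((stimulus - mu) / np.exp(log_sigma))
                p = np.clip(p, 1e-9, 1 - 1e-9)  # guard the logarithms
                return -np.sum(response * np.log(p)
                               + (1 - response) * np.log(1 - p))

            res = minimize(nll, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
            mu, log_sigma = res.x
            return mu, np.exp(log_sigma)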

  9. Quantitative nonlinearity analysis of model-scale jet noise

    NASA Astrophysics Data System (ADS)

    Miller, Kyle G.; Reichman, Brent O.; Gee, Kent L.; Neilsen, Tracianne B.; Atchley, Anthony A.

    2015-10-01

    The effects of nonlinearity on the power spectrum of jet noise can be directly compared with those of atmospheric absorption and geometric spreading through an ensemble-averaged, frequency-domain version of the generalized Burgers equation (GBE) [B. O. Reichman et al., J. Acoust. Soc. Am. 136, 2102 (2014)]. The rate of change in the sound pressure level due to the nonlinearity, in decibels per jet nozzle diameter, is calculated using a dimensionless form of the quadspectrum of the pressure and the squared-pressure waveforms. In this paper, this formulation is applied to atmospheric propagation of a spherically spreading, initial sinusoid and unheated model-scale supersonic (Mach 2.0) jet data. The rate of change in level due to nonlinearity is calculated and compared with estimated effects due to absorption and geometric spreading. Comparing these losses with the change predicted due to nonlinearity shows that absorption and nonlinearity are of similar magnitude in the geometric far field, where shocks are present, which causes the high-frequency spectral shape to remain unchanged.

  10. A quantitative model of the biogeochemical transport of iodine

    NASA Astrophysics Data System (ADS)

    Weng, H.; Ji, Z.; Weng, J.

    2010-12-01

    Iodine deficiency disorders (IDD) are among the world’s most prevalent public health problems yet preventable by dietary iodine supplements. To better understand the biogeochemical behavior of iodine and to explore safer and more efficient ways of iodine supplementation as alternatives to iodized salt, we studied the behavior of iodine as it is absorbed, accumulated and released by plants. Using Chinese cabbage as a model system and the 125I tracing technique, we established that plants uptake exogenous iodine from soil, most of which are transported to the stem and leaf tissue. The level of absorption of iodine by plants is dependent on the iodine concentration in soil, as well as the soil types that have different iodine-adsorption capacity. The leaching experiment showed that the remainder soil content of iodine after leaching is determined by the iodine-adsorption ability of the soil and the pH of the leaching solution, but not the volume of leaching solution. Iodine in soil and plants can also be released to the air via vaporization in a concentration-dependent manner. This study provides a scientific basis for developing new methods to prevent IDD through iodized vegetable production.

  11. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34
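
    Although the abstract does not spell out the model family, a voxel-wise composite biomarker score of this kind can be illustrated with a logistic combination of the quantitative parameters; the feature layout and model form below are stand-ins, not the authors' exact specification:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def voxelwise_cbs(features, labels):
            # features: voxels x parameters (e.g. T2, ADC, Ktrans, kep);
            # labels: 1 for cancer voxels, 0 otherwise, from coregistered
            # histopathology. Logistic regression is a stand-in model form.
            clf = LogisticRegression(max_iter=1000).fit(features, labels)
            cbs = clf.predict_proba(features)[:, 1]  # score per voxel
            return clf, cbs, roc_auc_score(labels, cbs)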

  12. Global existence of solutions and uniform persistence of a diffusive predator-prey model with prey-taxis

    NASA Astrophysics Data System (ADS)

    Wu, Sainan; Shi, Junping; Wu, Boying

    2016-04-01

    This paper proves the global existence and boundedness of solutions to a general reaction-diffusion predator-prey system with prey-taxis defined on a smooth bounded domain with no-flux boundary condition. The result holds for domains in arbitrary spatial dimension and small prey-taxis sensitivity coefficient. This paper also proves the existence of a global attractor and the uniform persistence of the system under some additional conditions. Applications to models from ecology and chemotaxis are discussed.

  13. Existence and analyticity of eigenvalues of a two-channel molecular resonance model

    NASA Astrophysics Data System (ADS)

    Lakaev, S. N.; Latipov, Sh. M.

    2011-12-01

    We consider a family of operators H_{γμ}(k), k ∈ T^d := (-π, π]^d, associated with the Hamiltonian of a system consisting of at most two particles on a d-dimensional lattice Z^d, interacting via both a pair contact potential (μ > 0) and creation and annihilation operators (γ > 0). We prove the existence of a unique eigenvalue of H_{γμ}(k), or its absence, depending on both the interaction parameters γ, μ ≥ 0 and the system quasimomentum k ∈ T^d. We show that the corresponding eigenvector is analytic. We establish that the eigenvalue and eigenvector are analytic functions of the quasimomentum k ∈ T^d in the existence domain G ⊂ T^d.

  14. Using Item-Type Performance Covariance to Improve the Skill Model of an Existing Tutor

    ERIC Educational Resources Information Center

    Pavlik, Philip I., Jr.; Cen, Hao; Wu, Lili; Koedinger, Kenneth R.

    2008-01-01

    Using data from an existing pre-algebra computer-based tutor, we analyzed the covariance of item-types with the goal of describing a more effective way to assign skill labels to item-types. Analyzing covariance is important because it allows us to place the skills in a related network in which we can identify the role each skill plays in learning…

  15. Existence and large time behavior for a stochastic model of modified magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Razafimandimby, Paul André; Sango, Mamadou

    2015-10-01

    In this paper, we study a system of nonlinear stochastic partial differential equations describing the motion of turbulent non-Newtonian media in the presence of a fluctuating magnetic field. The system is obtained by coupling the dynamical equations of non-Newtonian fluids having p-structure with the Maxwell equations. We mainly show the existence of weak martingale solutions and their exponential decay as time goes to infinity.

  16. Existence Theorems for Vortices in the Aharony-Bergman-Jaferis-Maldacena Model

    NASA Astrophysics Data System (ADS)

    Han, Xiaosen; Yang, Yisong

    2015-01-01

    A series of sharp existence and uniqueness theorems are established for the multiple vortex solutions in the supersymmetric Chern-Simons-Higgs theory formalism of Aharony, Bergman, Jaferis, and Maldacena, for which the Higgs bosons and Dirac fermions lie in the bifundamental representation of the general gauge symmetry group. The governing equations are of the BPS type and were derived by Kim, Kim, Kwon, and Nakajima in the mass-deformed framework labeled by a continuous parameter.

  17. Structural and Stratigraphic Evolution of the Iberia and Newfoundland Rifted Margins: A Quantitative Modeling Approach

    NASA Astrophysics Data System (ADS)

    Mohn, G.; Karner, G. D.; Manatschal, G.; Johnson, C. A.

    2014-12-01

    Rifted margins generally develop through polyphase extensional events that eventually lead to break-up. We investigate the spatial and temporal evolution of the Iberia-Newfoundland rifted margin from its Permian post-orogenic stage to early Cretaceous break-up. We have applied Quantitative Basin Analysis to integrate seismic stratigraphic interpretations and drill hole data of representative sections across the Iberia-Newfoundland margins with kinematic models for the thinning of the lithosphere and subsequent isostatic readjustment. Our goal is to predict the distribution of extension and thinning, environments of deposition, crustal structure and subsidence history as functions of space and time. The first sediments deposited on the Iberian continental crust were in response to Permian lithospheric thinning, associated with magmatic underplating and subsequent thermal re-equilibration of the lithosphere. During late Triassic-early Jurassic rifting, broadly distributed, depth-independent lithospheric extension occurred, followed by late Jurassic rifting that became increasingly focused with time and depth-dependent during the early Cretaceous. The along-strike deformation of the Iberia-Newfoundland margin was, however, diachronous: significant Valanginian-Hauterivian deformation characterizes the northern Galicia Bank-Flemish Cap region, while the southern Iberia-Newfoundland region is characterized by Tithonian-early Berriasian extension. Deformation localized with time on both margins, leading to late Aptian break-up. Matching the distribution and magnitude of subsidence across the profiles requires significant thinning of the middle/lower crust and subcontinental lithospheric mantle, leading to the formation of the hyper-extended domains. The late-stage deformation of both margins was characterized by predominantly brittle deformation of the residual continental crust, leading to exhumation of subcontinental mantle and ultimately to seafloor

  18. Quantitative analysis of free and bonded forms of volatile sulfur compounds in wine. Basic methodologies and evidences showing the existence of reversible cation-complexed forms.

    PubMed

    Franco-Luesma, Ernesto; Ferreira, Vicente

    2014-09-12

    This paper first examines some basic aspects critical to the analysis of volatile sulfur compounds (VSCs), such as the analytical characteristics of the GC-pFPD system and the stability of the different standard solutions required for a proper calibration. A direct static headspace analytical method for the determination of exclusively free forms of VSCs is then developed. Method repeatability is better than 4%, detection limits for the main analytes are below 0.5 μg L(-1), and the method's dynamic linear range (r(2) > 0.99) is expanded by controlling the split ratio in the chromatographic inlet to cover the natural range of occurrence of these compounds in wines. The method gives reliable estimates of headspace concentrations but, as expected, suffers from strong matrix effects, with recoveries ranging from 0 to 100% for H2S and from 60 to 100% for the other mercaptans. This demonstrates the existence of strong interactions of these compounds with different matrix components. The complexing ability of Cu(2+), and to a lesser extent Fe(2+) and Zn(2+), has been experimentally checked. A previously developed method, in which the wine is strongly diluted with brine and the volatiles are preconcentrated by HS-SPME, was found to give a reliable estimation of the total amount (free + complexed) of mercaptans, demonstrating that metal-mercaptan complexes are reversible. The comparative analysis of different wines by the two procedures reveals that in normal wines H2S and methanethiol can be complexed at levels above 99%, with averages around 97% for H2S and 75% for methanethiol, while thioethers such as dimethyl sulfide (DMS) are not complexed. Overall, the proposed strategy may be generalized to understand problems caused by VSCs in different matrices. PMID:25064535

  19. Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study.

    PubMed

    Borji, Ali; Sihite, Dicky N; Itti, Laurent

    2013-01-01

    Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as "visual saliency." Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often share common easy/difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state of the art, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains. PMID:22868572

  20. The Power of a Good Idea: Quantitative Modeling of the Spread of Ideas from Epidemiological Models

    SciTech Connect

    Bettencourt, L. M. A.; Cintron-Arias, A.; Kaiser, D. I.; Castillo-Chavez, C.

    2005-05-05

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence, this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.
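
    A paradigmatic model of the kind applied here is the SIR system reinterpreted for ideas. A minimal sketch (the parameter values are illustrative, not the paper's fitted estimates):

        import numpy as np
        from scipy.integrate import odeint

        def idea_sir(y, t, beta, gamma):
            # S = unaware, I = actively spreading the idea, R = no longer
            # spreading; beta and gamma play the roles of contact rate and
            # recovery rate familiar from epidemiology (gamma small for a
            # long-lived idea).
            s, i, r = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        t = np.linspace(0.0, 50.0, 500)
        # illustrative parameters, not the paper's fitted values
        s, i, r = odeint(idea_sir, [0.999, 0.001, 0.0], t, args=(0.8, 0.05)).T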

  1. Had the Planet Mars Not Existed: Kepler's Equant Model and Its Physical Consequences

    ERIC Educational Resources Information Center

    Bracco, C.; Provost, J.P.

    2009-01-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal…

  2. Adapting Existing Spatial Data Sets to New Uses: An Example from Energy Modeling

    SciTech Connect

    Johanesson, G; Stewart, J S; Barr, C; Sabeff, L B; George, R; Heimiller, D; Milbrandt, A

    2006-06-23

    Energy modeling and analysis often rely on data collected for other purposes, such as census counts, atmospheric and air quality observations, and economic projections. These data are available at various spatial and temporal scales, which may differ from those needed by the energy modeling community. If the translation from the original format to the format required by the energy researcher is incorrect, the resulting models can produce misleading conclusions. This is of increasing importance because of the fine-resolution data required by models for new alternative energy sources such as wind and distributed generation. This paper addresses the matter by applying spatial statistical techniques that improve the usefulness of spatial data sets (maps) that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) imputing missing data, and (3) merging spatial data sets.
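
    Of the three tasks listed, (1) is the most common stumbling block; a sketch of simple weight-proportional (dasymetric) disaggregation follows. The array shapes and the choice of ancillary weight layer are assumptions for illustration, not the paper's method:

        import numpy as np

        def areal_disaggregate(coarse_values, weights):
            # Split each coarse-cell total among its m fine cells in
            # proportion to an ancillary weight layer (e.g. land area or
            # population). coarse_values: (n,); weights: (n, m).
            w = np.asarray(weights, float)
            norm = w.sum(axis=1, keepdims=True)
            norm[norm == 0.0] = 1.0  # empty cells receive zero everywhere
            return np.asarray(coarse_values, float)[:, None] * w / norm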

  3. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2016-07-01

    coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.

  4. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2015-09-01

    also provided insights into the EDII, in particular highlighting drought events where missing impact reports reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis, and highlights the important role that quantitative analysis with impacts data can have in providing "ground truth" for drought indicators alongside more traditional stakeholder-led approaches.

  5. Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models

    NASA Astrophysics Data System (ADS)

    de Diesbach, Pablo Brice

    This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and critique the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from deeper, more business-oriented analyses of the mechanisms of rhetoric in marketing and communication, and of the possible role of E-SQ in man-agent interaction. We finally suggest a refined model for the impacts of these agents on web site users, and the limits of the improved model are discussed.

  6. Developmental modeling effects on the quantitative and qualitative aspects of motor performance.

    PubMed

    McCullagh, P; Stiehl, J; Weiss, M R

    1990-12-01

    The purpose of the present experiment was to replicate and extend previous developmental modeling research by examining the qualitative as well as quantitative aspects of motor performance. Eighty females of two age groups (5-0 to 6-6 and 7-6 to 9-0 years) were randomly assigned to conditions within a 2 x 2 x 2 (Age x Model Type x Rehearsal) factorial design. Children received either verbal instructions only (no model) or a visual demonstration with experimenter-given verbal cues (verbal model) of a five-part dance skill sequence. Children were either prompted to verbally rehearse before skill execution or merely asked to reproduce the sequence without prompting. Both quantitative (order) and qualitative (form) performances were assessed. Results revealed a significant age main effect for both order and form performance, with older children performing better than younger children. A model type main effect was also found for both order and form performance. The verbal model condition produced better qualitative performance, whereas the no model condition resulted in better quantitative scores. These results are discussed in terms of differential coding strategies that may influence task components in modeling. PMID:2132893

  7. Can existing climate models be used to study anthropogenic changes in tropical cyclone climate

    SciTech Connect

    Broccoli, A.J.; Manabe, S.

    1990-10-01

    The utility of current generation climate models for studying the influence of greenhouse warming on the tropical storm climatology is examined. A method developed to identify tropical cyclones is applied to a series of model integrations. The global distribution of tropical storms is simulated by these models in a generally realistic manner. While the model resolution is insufficient to reproduce the fine structure of tropical cyclones, the simulated storms become more realistic as resolution is increased. To obtain a preliminary estimate of the response of the tropical cyclone climatology, CO2 was doubled using models with varying cloud treatments and different horizontal resolutions. In the experiment with prescribed cloudiness, the number of storm-days, a combined measure of the number and duration of tropical storms, undergoes a statistically significant reduction; a reduction of the number of storm-days is also indicated in the experiment with cloud feedback. In both cases the response is independent of horizontal resolution. While the inconclusive nature of these experimental results highlights the uncertainties that remain in examining the details of greenhouse-gas induced climate change, the ability of the models to qualitatively simulate the tropical storm climatology suggests that they are appropriate tools for this problem.

  8. Global existence analysis for degenerate energy-transport models for semiconductors

    NASA Astrophysics Data System (ADS)

    Zamponi, Nicola; Jüngel, Ansgar

    2015-04-01

    A class of energy-transport equations without electric field under mixed Dirichlet-Neumann boundary conditions is analyzed. The system of degenerate and strongly coupled parabolic equations for the particle density and temperature arises in semiconductor device theory. The global-in-time existence of weak nonnegative solutions is shown. The proof consists of a variable transformation and a semi-discretization in time such that the discretized system becomes elliptic and semilinear. Positive approximate solutions are obtained by Stampacchia truncation arguments and a new cut-off test function. Nonlogarithmic entropy inequalities yield gradient estimates which allow for the limit of vanishing time step sizes. Exploiting the entropy inequality, the long-time convergence of the weak solutions to the constant steady state is proved. Because of the lack of appropriate convex Sobolev inequalities to estimate the entropy dissipation, only an algebraic decay rate is obtained. Numerical experiments indicate that the decay rate is typically exponential.

  9. Building Coalitions To Provide HIV Legal Advocacy Services: Utilizing Existing Disability Models. AIDS Technical Report, No. 5.

    ERIC Educational Resources Information Center

    Harvey, David C.; Ardinger, Robert S.

    This technical report is part of a series on AIDS/HIV (Acquired Immune Deficiency Syndrome/Human Immunodeficiency Virus) and is intended to help link various legal advocacy organizations providing services to persons with mental illness or developmental disabilities. This report discusses strategies to utilize existing disability models for…

  10. Utilization of data estimation via existing models, within a tiered data quality system, for populating species sensitivity distributions

    EPA Science Inventory

    The acquisition of toxicity test data of sufficient quality from the open literature to fulfill taxonomic diversity requirements can be a limiting factor in the creation of new 304(a) Aquatic Life Criteria. The use of existing models (WebICE and ACE) that estimate acute and chronic eff...