Science.gov

Sample records for existing models quantitatively

  1. Comment on "Can existing models quantitatively describe the mixing behavior of acetone with water?" [J. Chem. Phys. 130, 124516 (2009)].

    PubMed

    Kang, Myungshim; Perera, Aurelien; Smith, Paul E

    2009-10-21

    A recent publication indicated that simulations of acetone-water mixtures using the KBFF model for acetone exhibit demixing at acetone mole fractions below 0.28, in disagreement with experiment and with two previously published studies. Here, we point out some inconsistencies in that study which could help to explain these differences. PMID:20568888

  2. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters a rheological model has, the better it will reproduce available data, though this does not mean that it is a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to the data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and to perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model against power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
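
    The balance described above can be illustrated with an information-criterion calculation, a crude stand-in for the full Bayesian evidence the authors compute (which also folds in the a priori parameter ranges). The synthetic data, the two candidate Maxwell fits, and all numerical values below are hypothetical.

      # Sketch: penalized selection between 1- and 2-mode Maxwell fits via BIC.
      import numpy as np
      from scipy.optimize import curve_fit

      t = np.logspace(-2, 2, 60)
      rng = np.random.default_rng(0)
      y = 2.0*np.exp(-t/0.1) + 1.0*np.exp(-t/10.0)      # "true" 2-mode relaxation
      y *= 1.0 + 0.05*rng.standard_normal(t.size)       # 5% measurement noise

      def maxwell(t, *p):                               # sum of exponential modes
          return sum(g*np.exp(-t/tau) for g, tau in zip(p[0::2], p[1::2]))

      def bic(k, resid):                                # misfit plus parameter penalty
          n = resid.size
          return n*np.log(np.sum(resid**2)/n) + k*np.log(n)

      for k, p0 in ((2, [1.0, 1.0]), (4, [1.0, 0.05, 1.0, 5.0])):
          popt, _ = curve_fit(maxwell, t, y, p0=p0, maxfev=10000)
          print(f"{k//2}-mode Maxwell: BIC = {bic(k, y - maxwell(t, *popt)):.1f}")

    A broad prior range inflates the penalty beyond the simple k*ln(n) term used here, which is the effect the abstract emphasizes for purely empirical models.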

  3. University Students' Research Orientations: Do Negative Attitudes Exist toward Quantitative Methods?

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2005-01-01

    This paper examines university social science and education students' views of research methodology, especially asking whether a negative research orientation towards quantitative methods exists. Finnish (n = 196) and US (n = 122) students answered a questionnaire concerning their views on quantitative, qualitative, empirical, and theoretical…

  4. LDEF data: Comparisons with existing models

    NASA Technical Reports Server (NTRS)

    Coombs, Cassandra R.; Watts, Alan J.; Wagner, John D.; Atkinson, Dale R.

    1993-01-01

    The relationship between the observed cratering impact damage on the Long Duration Exposure Facility (LDEF) and the existing models for both the natural micrometeoroid environment and man-made debris was investigated. Experimental data were provided by several LDEF Principal Investigators, Meteoroid and Debris Special Investigation Group (M&D SIG) members, and Kennedy Space Center Analysis Team (KSC A-Team) members. These data were collected from various aluminum materials around the LDEF satellite. A personal computer (PC) program, SPENV, was written which incorporates the existing models of the Low Earth Orbit (LEO) environment. This program calculates the expected number of impacts per unit area as a function of altitude, orbital inclination, time in orbit, and direction of the spacecraft surface relative to the velocity vector, for both micrometeoroids and man-made debris. Since both particle models are couched in terms of impact fluxes versus impactor particle size, while much of the LDEF data is in the form of crater production rates, scaling laws have been used to relate the two. In addition, many hydrodynamic impact simulations of various impact events were conducted with the CTH code; these identified certain modes of response, including simple metallic target cratering, perforation, and delamination effects of coatings.
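
    At its core, the flux-to-count conversion that a program like SPENV performs reduces to integrating a cumulative flux model over exposed area and time. The power-law constants below are placeholders for illustration only, not the actual micrometeoroid or debris environment models.

      # Sketch: expected impact counts from a hypothetical cumulative flux law.
      def cumulative_flux(d):
          """Impacts per m^2 per year by particles with diameter > d (metres)."""
          return 1e-5 * (d / 1e-4)**-2.5          # hypothetical power law

      area_m2, years = 1.0, 5.75                  # LDEF flew for roughly 5.75 years
      for d in (1e-4, 1e-3, 1e-2):
          n = cumulative_flux(d) * area_m2 * years
          print(f"d > {d:g} m: expected impacts = {n:.3g}")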

  5. Quantitative Predictive Models for Systemic Toxicity (SOT)

    EPA Science Inventory

    Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...

  6. The mathematics of cancer: integrating quantitative models.

    PubMed

    Altrock, Philipp M; Liu, Lin L; Michor, Franziska

    2015-12-01

    Mathematical modelling approaches have become increasingly abundant in cancer research. The complexity of cancer is well suited to quantitative approaches as it provides challenges and opportunities for new developments. In turn, mathematical modelling contributes to cancer research by helping to elucidate mechanisms and by providing quantitative predictions that can be validated. The recent expansion of quantitative models addresses many questions regarding tumour initiation, progression and metastases as well as intra-tumour heterogeneity, treatment responses and resistance. Mathematical models can complement experimental and clinical studies, but also challenge current paradigms, redefine our understanding of mechanisms driving tumorigenesis and shape future research in cancer biology.

  7. Integrated Environmental Modeling: Quantitative Microbial Risk Assessment

    EPA Science Inventory

    The presentation discusses the need for microbial assessments and presents a road map associated with quantitative microbial risk assessments, through an integrated environmental modeling approach. A brief introduction and the strengths of the current knowledge are illustrated. W...

  8. 6 Principles for Quantitative Reasoning and Modeling

    ERIC Educational Resources Information Center

    Weber, Eric; Ellis, Amy; Kulow, Torrey; Ozgur, Zekiye

    2014-01-01

    Encouraging students to reason with quantitative relationships can help them develop, understand, and explore mathematical models of real-world phenomena. Through two examples--modeling the motion of a speeding car and the growth of a Jactus plant--this article describes how teachers can use six practical tips to help students develop quantitative…

  9. Quantitative Modeling of Earth Surface Processes

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.

  11. Quantitative risk modeling in aseptic manufacture.

    PubMed

    Tidswell, Edward C; McGarvey, Bernard

    2006-01-01

    Expedient risk assessment of aseptic manufacturing processes offers unique opportunities for improved and sustained assurance of product quality. Contemporary risk assessments applied to aseptic manufacturing processes, however, are commonly handicapped by assumptions and subjectivity, leading to inexactitude. Quantitative risk modeling augmented with Monte Carlo simulations represents a novel, innovative, and more efficient means of risk assessment. This technique relies upon fewer assumptions and removes subjectivity to more swiftly generate an improved, more realistic, quantitative estimate of risk. The fundamental steps and requirements for an assessment of the risk of bioburden ingress into aseptically manufactured products are described. A case study exemplifies how quantitative risk modeling and Monte Carlo simulations achieve a more rapid and improved determination of the risk of bioburden ingress during the aseptic filling of a parenteral product. Although application of quantitative risk modeling is described here purely for the purpose of process improvement, the technique has far wider relevance in the assisted disposition of batches, cleanroom management, and the utilization of real-time data from rapid microbial monitoring technologies. PMID:17089696

  12. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
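
    The Bayesian updating mentioned above is commonly a conjugate gamma-Poisson update of a Basic Event failure rate, which can live in the same spreadsheet or, as sketched here, in a few lines of code. The prior and the flight/test exposure are invented for illustration.

      # Sketch: gamma-Poisson update of a failure rate from operating experience.
      # Prior: lambda ~ Gamma(alpha0, beta0), with beta0 in hours of exposure.
      alpha0, beta0 = 0.5, 1.0e6       # hypothetical generic-industry prior
      k, T = 1, 2.4e5                  # hypothetical record: 1 failure in 240,000 hr

      alpha1, beta1 = alpha0 + k, beta0 + T
      print(f"prior mean rate:     {alpha0/beta0:.2e} per hour")
      print(f"posterior mean rate: {alpha1/beta1:.2e} per hour")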

  13. A quantitative comparison of Calvin-Benson cycle models.

    PubMed

    Arnold, Anne; Nikoloski, Zoran

    2011-12-01

    The Calvin-Benson cycle (CBC) provides the precursors for biomass synthesis necessary for plant growth. The dynamic behavior and yield of the CBC depend on the environmental conditions and regulation of the cellular state. Accurate quantitative models hold the promise of identifying the key determinants of the tightly regulated CBC function and their effects on the responses in future climates. We provide an integrative analysis of the largest compendium of existing models for photosynthetic processes. Based on the proposed ranking, our framework facilitates the discovery of best-performing models with regard to metabolomics data and of candidates for metabolic engineering.

  14. Quantitative model of record stratospheric freefall

    NASA Astrophysics Data System (ADS)

    Colino, José M.; Barbero, Antonio J.

    2013-07-01

    Before the record stratospheric jump of Felix Baumgartner last October, the study of high-altitude freefall was hampered by the lack of sufficient or reliable experimental data. His feat, and the organizers' publication of GPS data, have provided us with a nice experiment to study in an introductory physics course. A quantitative approach to freefall dynamics requires only a spreadsheet model incorporating air properties calculated with the standard atmosphere model and a speed-dependent drag force in the transonic-supersonic regime through a variable C_D·A product, where C_D is the drag coefficient and A is the maximum skydiver cross-sectional area.
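
    The spreadsheet model described can be condensed into a short explicit integration. The exponential-density fit and the constant C_D·A below are crude stand-ins for the paper's standard-atmosphere tables and Mach-dependent drag, so the numbers are indicative only.

      # Sketch: high-altitude freefall with altitude-dependent air density.
      import math

      def rho(h):                          # rough exponential fit to the standard atmosphere
          return 1.225 * math.exp(-h / 8500.0)   # kg/m^3, h in metres

      m, g, cda = 120.0, 9.81, 0.9         # jumper mass (kg), gravity, assumed C_D*A (m^2)
      h, v, t, dt = 39000.0, 0.0, 0.0, 0.05
      v_peak = 0.0
      while h > 2500.0:                    # stop near a typical parachute-opening altitude
          drag = 0.5 * rho(h) * v * v * cda
          v += (g - drag / m) * dt         # downward speed, positive down
          h -= v * dt
          t += dt
          v_peak = max(v_peak, v)
      print(f"peak speed ~ {v_peak:.0f} m/s after ~{t:.0f} s of freefall")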

  15. Quantitative indices of autophagy activity from minimal models

    PubMed Central

    2014-01-01

    Background A number of cellular- and molecular-level studies of autophagy assessment have been carried out with the help of various biochemical and morphological indices. Still, there exists ambiguity for the assessment of the autophagy status and of the causal relationship between autophagy and related cellular changes. To circumvent such difficulties, we probe new quantitative indices of autophagy which are important for defining autophagy activation and further assessing its roles associated with different physiopathological states. Methods Our approach is based on the minimal autophagy model that allows us to understand underlying dynamics of autophagy from biological experiments. Specifically, based on the model, we reconstruct the experimental context-specific autophagy profiles from the target autophagy system, and two quantitative indices are defined from the model-driven profiles. The indices are then applied to the simulation-based analysis, for the specific and quantitative interpretation of the system. Results Two quantitative indices measuring autophagy activities in the induction of sequestration fluxes and in the selective degradation are proposed, based on the model-driven autophagy profiles such as the time evolution of autophagy fluxes, levels of autophagosomes/autolysosomes, and corresponding cellular changes. Further, with the help of the indices, those biological experiments of the target autophagy system have been successfully analyzed, implying that the indices are useful not only for defining autophagy activation but also for assessing its role in a specific and quantitative manner. Conclusions Such quantitative autophagy indices in conjunction with the computer-aided analysis should provide new opportunities to characterize the causal relationship between autophagy activity and the corresponding cellular change, based on the system-level understanding of the autophagic process at good time resolution, complementing the current in vivo and in…

  16. Training of Existing Workers: Issues, Incentives and Models. Support Document

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This document was produced by the authors based on their research for the report, "Training of Existing Workers: Issues, Incentives and Models," (ED495138) and is an added resource for further information. This support document is divided into the following sections: (1) The Retail Industry--A Snapshot; (2) Case Studies--Hardware, Retail Industry…

  17. Competitive speciation in quantitative genetic models.

    PubMed

    Drossel, B; McKane, A

    2000-06-01

    We study sympatric speciation due to competition in an environment with a broad distribution of resources. We assume that the trait under selection is a quantitative trait, and that mating is assortative with respect to this trait. Our model alternates selection according to Lotka-Volterra-type competition equations, with reproduction using the ideas of quantitative genetics. The recurrence relations defined by these equations are studied numerically and analytically. We find that when a population enters a new environment, with a broad distribution of unexploited food sources, the population distribution broadens under a variety of conditions, with peaks at the edge of the distribution indicating the formation of subpopulations. After a long enough time period, the population can split into several subpopulations with little gene flow between them. PMID:10816369
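
    A minimal caricature of the ecological half of such a model (Lotka-Volterra-type competition along a trait axis, with a small diffusion term standing in for mutation and recombination, and without the assortative-mating genetics) already shows a broad resource distribution driving a founder population toward several peaks. All kernel shapes, widths, and rates below are arbitrary illustration values, not the paper's.

      # Sketch: competition-driven splitting of a trait distribution.
      import numpy as np

      x = np.linspace(-2.0, 2.0, 201)
      dx = x[1] - x[0]
      K = np.exp(-x**2 / (2 * 0.8**2))            # broad resource (carrying capacity)
      alpha = (np.abs(x) <= 0.3).astype(float)    # narrower (top-hat) competition kernel
      n = 0.01 * np.exp(-x**2 / (2 * 0.05**2))    # founder population near x = 0

      for _ in range(3000):
          comp = dx * np.convolve(n, alpha, mode="same")     # competition felt at each trait
          n = n * np.exp(0.1 * (1.0 - comp / K))             # discrete logistic-type growth
          n += 1e-4 * np.gradient(np.gradient(n, dx), dx)    # weak trait diffusion
          n = np.maximum(n, 0.0)

      peaks = x[(n > np.roll(n, 1)) & (n > np.roll(n, -1)) & (n > 0.1 * n.max())]
      print("subpopulation peaks near x =", np.round(peaks, 2))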

  18. The existence of amorphous phase in Portland cements: Physical factors affecting Rietveld quantitative phase analysis

    SciTech Connect

    Snellings, Ruben; Bazzoni, Amélie; Scrivener, Karen

    2014-05-01

    Rietveld quantitative phase analysis has become a widespread tool for the characterization of Portland cement, both for research and production control purposes. One of the major remaining points of debate is whether Portland cements contain amorphous content or not. This paper presents detailed analyses of the amorphous phase contents in a set of commercial Portland cements, clinker, synthetic alite and limestone by Rietveld refinement of X-ray powder diffraction measurements using both external and internal standard methods. A systematic study showed that the sample preparation and comminution procedure is closely linked to the calculated amorphous contents. Particle size reduction by wet-grinding lowered the calculated amorphous contents to insignificant quantities for all materials studied. No amorphous content was identified in the final analysis of the Portland cements under investigation.

  19. On the existence of monodromies for the Rabi model

    NASA Astrophysics Data System (ADS)

    Carneiro da Cunha, Bruno; Carvalho de Almeida, Manuela; Rabelo de Queiroz, Amílcar

    2016-05-01

    We discuss the existence of monodromies associated with the singular points of the eigenvalue problem for the Rabi model. The complete control of the full monodromy data requires the taming of the Stokes phenomenon associated with the unique irregular singular point. The monodromy data, in particular, the composite monodromy, are written in terms of the parameters of the model via the isomonodromy method and the τ function of the Painlevé V. These data provide a systematic way to obtain the quantized spectrum of the Rabi model.

  20. Quantitative risk modelling for new pharmaceutical compounds.

    PubMed

    Tang, Zhengru; Taylor, Mark J; Lisboa, Paulo; Dyas, Mark

    2005-11-15

    The process of discovering and developing new drugs is long, costly and risk-laden. Faced with a wealth of newly discovered compounds, industrial scientists need to target resources carefully to discern the key attributes of a drug candidate and to make informed decisions. Here, we describe a quantitative approach to modelling the risk associated with drug development as a tool for scenario analysis concerning the probability of success of a compound as a potential pharmaceutical agent. We bring together the three strands of manufacture, clinical effectiveness and financial returns. This approach involves the application of a Bayesian Network. A simulation model is demonstrated with an implementation in MS Excel using the modelling engine Crystal Ball. PMID:16257374

  21. Magnetospheric mapping with quantitative geomagnetic field models

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Mead, G. D.

    1973-01-01

    The Mead-Fairfield geomagnetic field models were used to trace field lines between the outer magnetosphere and the earth's surface. The results are presented in terms of ground latitude and local time contours projected to the equatorial plane and into the geomagnetic tail. With these contours, various observations can be mapped along field lines between high and low altitudes. Low-altitude observations of the polar cap boundary, the polar cusp, the energetic electron trapping boundary, and the sunward convection region are projected to the equatorial plane and compared with the results of the model and with each other. The results provide quantitative support for the earlier suggestions that the trapping boundary is associated with the last closed field line in the sunward hemisphere, that the polar cusp is associated with the region of the last closed field line, and that the polar cap projects to the geomagnetic tail with a low-latitude boundary corresponding to the last closed field line.

  1. Quantitative bioluminescence imaging of mouse tumor models.

    PubMed

    Tseng, Jen-Chieh; Kung, Andrew L

    2015-01-05

    Bioluminescence imaging (BLI) has become an essential technique for preclinical evaluation of anticancer therapeutics and provides sensitive and quantitative measurements of tumor burden in experimental cancer models. For light generation, a vector encoding firefly luciferase is introduced into human cancer cells that are grown as tumor xenografts in immunocompromised hosts, and the enzyme substrate luciferin is injected into the host. Alternatively, the reporter gene can be expressed in genetically engineered mouse models to determine the onset and progression of disease. In addition to expression of an ectopic luciferase enzyme, bioluminescence requires oxygen and ATP, thus only viable luciferase-expressing cells or tissues are capable of producing bioluminescence signals. Here, we summarize a BLI protocol that takes advantage of advances in hardware, especially the cooled charge-coupled device camera, to enable detection of bioluminescence in living animals with high sensitivity and a large dynamic range.

  2. Are existing snow microwave emission models so different?

    NASA Astrophysics Data System (ADS)

    Picard, G.; Loewe, H.; Sandells, M. J.; Durand, M. T.; Pan, J.; Royer, A.; Mätzler, C.; Floury, N.

    2015-12-01

    Several models to compute the thermal emission of snow-covered areas in the microwave domain have been developed in the past few decades and have become very popular, such as HUT, MEMLS, and a few others based on DMRT theory. These models differ from one another in many respects. Numerous studies have exploited one of them and drawn conclusions about its skill, or used it to retrieve snow properties, assimilate observations, etc. Nevertheless, the portability of this knowledge to the other models is a concern. An increasing number of studies have also compared pairs of these models to show, under specific conditions, the differences between simulations and their skills with respect to observations. There is a consensus that none of these models always (or sufficiently often) performs better than all the others. Nevertheless, the strategies developed to perform these comparisons are so dependent on how the model differences are handled that the scope of numerically based comparisons is limited. The diversity of investigated snow conditions and the ground-truth uncertainties are also limiting factors. With the rising need to consolidate its findings, the snow passive microwave remote sensing community is eager to understand the profound similarities and differences of its models, and to this end has started to re-cast the different model formulations in a common framework. In this work, we address this question for a few specific aspects of the models: the description of the microstructure (grain size parameters and underlying assumptions) and the solution method of radiative transfer theory (multi-stream versus N-stream and phase function representation). We show that existing models are not so different from each other, especially when consistent parameters (or representations) are used. Finally, we present proposals to build a new-generation model that would be sufficiently modular to encompass the (small) diversity of current models.

  3. Quantitative mass distribution models for Mare Orientale

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.; Smith, J. C.

    1976-01-01

    Six theoretical models for the mass distribution of Mare Orientale were tested using five gravity profiles extracted from radio-tracking data of orbiting spacecraft. The models with surface mass and moho relief produced the best results. Although there is a mascon-type anomaly in the central mare region, Mare Orientale is a large negative gravity anomaly. This is produced primarily by the empty ring basin. Had the basin filled with mare material, it seems likely that it would have produced a mascon such as those presently existing in flooded frontside circular basins.

  4. Existence of needle crystals in local models of solidification

    NASA Technical Reports Server (NTRS)

    Langer, J. S.

    1986-01-01

    The way in which surface tension acts as a singular perturbation to destroy the continuous family of needle-crystal solutions of the steady-state growth equations is analyzed in detail for two local models of solidification. All calculations are performed in the limit of small surface tension or, equivalently, small velocity. The basic mathematical ideas are introduced in connection with a quasilinear, isotropic version of the geometrical model of Brower et al., in which case the continuous family of solutions disappears completely. The formalism is then applied to a simplified boundary-layer model with an anisotropic kinetic attachment coefficient. In the latter case, the solvability condition for the existence of needle crystals can be satisfied whenever the coefficient of anisotropy is nonzero, however small.

  5. A transformative model for undergraduate quantitative biology education.

    PubMed

    Usher, David C; Driscoll, Tobin A; Dhurjati, Prasad; Pelesko, John A; Rossi, Louis F; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B

    2010-01-01

    The BIO2010 report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating a new interdisciplinary major, quantitative biology, designed for students interested in solving complex biological problems using advanced mathematical approaches. To develop the bio-calculus sections, the Department of Mathematical Sciences revised its three-semester calculus sequence to include differential equations in the first semester and, rather than using examples traditionally drawn from application domains that are most relevant to engineers, drew models and examples heavily from the life sciences. The curriculum of the B.S. degree in Quantitative Biology was designed to provide students with a solid foundation in biology, chemistry, and mathematics, with an emphasis on preparation for research careers in life sciences. Students in the program take core courses from biology, chemistry, and physics, though mathematics, as the cornerstone of all quantitative sciences, is given particular prominence. Seminars and a capstone course stress how the interplay of mathematics and biology can be used to explain complex biological systems. To initiate these academic changes required the identification of barriers and the implementation of solutions.

  6. Physiologically based quantitative modeling of unihemispheric sleep.

    PubMed

    Kedziora, D J; Abeysuriya, R G; Phillips, A J K; Robinson, P A

    2012-12-01

    Unihemispheric sleep has been observed in numerous species, including birds and aquatic mammals. While knowledge of its functional role has been improved in recent years, the physiological mechanisms that generate this behavior remain poorly understood. Here, unihemispheric sleep is simulated using a physiologically based quantitative model of the mammalian ascending arousal system. The model includes mutual inhibition between wake-promoting monoaminergic nuclei (MA) and sleep-promoting ventrolateral preoptic nuclei (VLPO), driven by circadian and homeostatic drives as well as cholinergic and orexinergic input to MA. The model is extended here to incorporate two distinct hemispheres and their interconnections. It is postulated that inhibitory connections between VLPO nuclei in opposite hemispheres are responsible for unihemispheric sleep, and it is shown that contralateral inhibitory connections promote unihemispheric sleep while ipsilateral inhibitory connections promote bihemispheric sleep. The frequency of alternating unihemispheric sleep bouts is chiefly determined by sleep homeostasis and its corresponding time constant. It is shown that the model reproduces dolphin sleep, and that the sleep regimes of humans, cetaceans, and fur seals, the latter both terrestrially and in a marine environment, require only modest changes in contralateral connection strength and homeostatic time constant. It is further demonstrated that fur seals can potentially switch between their terrestrial bihemispheric and aquatic unihemispheric sleep patterns by varying just the contralateral connection strength. These results provide experimentally testable predictions regarding the differences between species that sleep bihemispherically and unihemispherically. PMID:22960411

  7. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  8. Quantitative risk assessment modeling for nonhomogeneous urban road tunnels.

    PubMed

    Meng, Qiang; Qu, Xiaobo; Wang, Xinchang; Yuanita, Vivi; Wong, Siew Chee

    2011-03-01

    Urban road tunnels provide an increasingly cost-effective engineering solution, especially in compact cities like Singapore. For some urban road tunnels, characteristics such as tunnel configuration, geometry, provisions of tunnel electrical and mechanical systems, traffic volumes, etc. may vary from one section to another. Urban road tunnels characterized by such nonuniform parameters are referred to as nonhomogeneous urban road tunnels. In this study, a novel quantitative risk assessment (QRA) model is proposed for nonhomogeneous urban road tunnels, because the existing QRA models for road tunnels are inapplicable to assessing the risks in such tunnels. This model uses a tunnel segmentation principle whereby a nonhomogeneous urban road tunnel is divided into homogeneous sections. Individual risk for road tunnel sections as well as integrated risk indices for the entire road tunnel are defined. The article then proceeds to develop a new QRA model for each of the homogeneous sections. Compared to the existing QRA models for road tunnels, this section-based model incorporates one additional top event (toxic gases due to traffic congestion) and employs the Poisson regression method to estimate the vehicle accident frequencies of tunnel sections. The article further illustrates an aggregated QRA model for nonhomogeneous urban tunnels by integrating the section-based QRA models. Finally, a case study in Singapore is carried out.
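
    The section-based aggregation can be sketched compactly: a fitted Poisson regression supplies each homogeneous section's accident frequency, and weighted section risks are summed into a tunnel-level index. The coefficients and section attributes below are invented, not the paper's fitted values.

      # Sketch: aggregating section risks for a nonhomogeneous tunnel.
      import math

      b0, b1, b2 = -4.0, 0.8, 0.3          # hypothetical Poisson regression coefficients
      sections = [                          # (length km, AADT in 10,000s, consequence weight)
          (0.6, 3.2, 1.0),
          (1.1, 2.5, 1.5),                  # e.g. a section with poorer ventilation
          (0.4, 3.2, 0.8),
      ]
      total = 0.0
      for length, traffic, weight in sections:
          lam = math.exp(b0 + b1 * length + b2 * traffic)   # accidents per year, this section
          total += lam * weight
          print(f"section: lambda = {lam:.3f}/yr, weighted contribution = {lam*weight:.3f}")
      print(f"aggregated tunnel risk index = {total:.3f}")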

  9. First Principles Quantitative Modeling of Molecular Devices

    NASA Astrophysics Data System (ADS)

    Ning, Zhanyu

    In this thesis, we report theoretical investigations of nonlinear and nonequilibrium quantum electronic transport properties of molecular transport junctions from atomistic first principles. The aim is to seek not only qualitative but also quantitative understanding of the corresponding experimental data. At present, the challenges to quantitative theoretical work in molecular electronics include two most important questions: (i) what is the proper atomic model for the experimental devices? (ii) how to accurately determine quantum transport properties without any phenomenological parameters? Our research is centered on these questions. We have systematically calculated atomic structures of the molecular transport junctions by performing total energy structural relaxation using density functional theory (DFT). Our quantum transport calculations were carried out by implementing DFT within the framework of Keldysh non-equilibrium Green's functions (NEGF). The calculated data are directly compared with the corresponding experimental measurements. Our general conclusion is that quantitative comparison with experimental data can be made if the device contacts are correctly determined. We calculated properties of nonequilibrium spin injection from Ni contacts to octane-thiolate films which form a molecular spintronic system. The first principles results allow us to establish a clear physical picture of how spins are injected from the Ni contacts through the Ni-molecule linkage to the molecule, why tunnel magnetoresistance is rapidly reduced by the applied bias in an asymmetric manner, and to what extent ab initio transport theory can make quantitative comparisons to the corresponding experimental data. We found that extremely careful sampling of the two-dimensional Brillouin zone of the Ni surface is crucial for accurate results in such a spintronic system. We investigated the role of contact formation and its resulting structures to quantum transport in several molecular…
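
    For reference, the central quantity in the NEGF-DFT framework is the bias-dependent transmission T(E, V); the current then follows from the Landauer-Büttiker formula,

      I(V) = \frac{2e}{h} \int T(E, V)\, \left[ f_L(E) - f_R(E) \right] dE

    where f_L and f_R are the Fermi functions of the two contacts (in spin-resolved calculations, such as the Ni junctions here, the factor of 2 is replaced by a sum over spin channels).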

  10. Evaluation (not validation) of quantitative models.

    PubMed

    Oreskes, N

    1998-12-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid.

  11. Evaluation (not validation) of quantitative models.

    PubMed Central

    Oreskes, N

    1998-01-01

    The present regulatory climate has led to increasing demands for scientists to attest to the predictive reliability of numerical simulation models used to help set public policy, a process frequently referred to as model validation. But while model validation may reveal useful information, this paper argues that it is not possible to demonstrate the predictive reliability of any model of a complex natural system in advance of its actual use. All models embed uncertainties, and these uncertainties can and frequently do undermine predictive reliability. In the case of lead in the environment, we may categorize model uncertainties as theoretical, empirical, parametrical, and temporal. Theoretical uncertainties are aspects of the system that are not fully understood, such as the biokinetic pathways of lead metabolism. Empirical uncertainties are aspects of the system that are difficult (or impossible) to measure, such as actual lead ingestion by an individual child. Parametrical uncertainties arise when complexities in the system are simplified to provide manageable model input, such as representing longitudinal lead exposure by cross-sectional measurements. Temporal uncertainties arise from the assumption that systems are stable in time. A model may also be conceptually flawed. The Ptolemaic system of astronomy is a historical example of a model that was empirically adequate but based on a wrong conceptualization. Yet had it been computerized--and had the word then existed--its users would have had every right to call it validated. Thus, rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid.

  13. Toward quantitative modeling of silicon phononic thermocrystals

    SciTech Connect

    Lacatena, V.; Haras, M.; Robillard, J.-F.; Dubois, E.; Monfray, S.; Skotnicki, T.

    2015-03-16

    The wealth of patterning technologies of deca-nanometer resolution brings opportunities to artificially modulate thermal transport properties. A promising example is given by the recent concepts of 'thermocrystals' or 'nanophononic crystals', which introduce regular nano-scale inclusions with a pitch between the thermal phonon mean free path and the electron mean free path. In such structures, the lattice thermal conductivity is reduced by up to two orders of magnitude with respect to its bulk value. Beyond the promise held by these materials to overcome the well-known “electron crystal-phonon glass” dilemma faced in thermoelectrics, the quantitative prediction of their thermal conductivity poses a challenge. This work paves the way toward understanding and designing silicon nanophononic membranes by means of molecular dynamics simulation. Several systems are studied in order to distinguish the shape contribution: bulk silicon, ultra-thin membranes (8 to 15 nm), 2D phononic crystals, and finally 2D phononic membranes. After discussing the equilibrium properties of these structures from 300 K to 400 K, the Green-Kubo methodology is used to quantify the thermal conductivity. The results account for several experimental trends and models. It is confirmed that the thin-film geometry as well as the phononic structure act to reduce the thermal conductivity. The further decrease in the phononically engineered membrane clearly demonstrates that the two phenomena are cumulative. Finally, limitations of the model and further perspectives are discussed.
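
    For reference, the Green-Kubo route mentioned above extracts the lattice thermal conductivity from equilibrium fluctuations of the microscopic heat flux J; in its standard isotropic form,

      \kappa = \frac{1}{3 V k_B T^2} \int_0^{\infty} \left\langle \mathbf{J}(0) \cdot \mathbf{J}(t) \right\rangle \, dt

    where V is the simulation cell volume, T the temperature, and the angle brackets denote an equilibrium ensemble average.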

  14. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  15. Existence of Periodic Solutions for a Modified Growth Solow Model

    NASA Astrophysics Data System (ADS)

    Fabião, Fátima; Borges, Maria João

    2010-10-01

    In this paper we analyze the dynamics of the Solow growth model with a Cobb-Douglas production function. For this purpose, we consider that the labour growth rate, L'(t)/L(t), is a T-periodic function, for a fixed positive real number T. We obtain closed-form solutions for the fundamental Solow equation with the new description of L(t). Using notions from the qualitative theory of ordinary differential equations and nonlinear functional analysis, we prove that there exists one T-periodic solution of the Solow equation. From the economic point of view this is a new result which allows a more realistic interpretation of the stylized facts.
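
    For context, the fundamental Solow equation being modified is, for per-capita capital k = K/L with Cobb-Douglas production,

      k'(t) = s\,k(t)^{\alpha} - \left( \delta + \frac{L'(t)}{L(t)} \right) k(t)

    where s is the savings rate, \alpha the capital exponent, and \delta the depreciation rate; the paper's modification is to let L'(t)/L(t) be T-periodic rather than a constant n.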

  16. Comparative Application of Capacity Models for Seismic Vulnerability Evaluation of Existing RC Structures

    SciTech Connect

    Faella, C.; Lima, C.; Martinelli, E.; Nigro, E.

    2008-07-08

    Seismic vulnerability assessment of existing buildings is one of the most common tasks in which structural engineers are currently engaged. Since it is often a preliminary step in deciding how to retrofit structures that were not seismically designed and detailed, it plays a key role in the successful choice of the most suitable strengthening technique. In this framework, the basic information for both seismic assessment and retrofitting is related to the formulation of capacity models for structural members. Plenty of proposals, often contradictory from a quantitative standpoint, are currently available in the technical and scientific literature for defining structural capacity in terms of forces and displacements, possibly with reference to different parameters representing the seismic response. The present paper briefly reviews some of the models for the capacity of RC members and compares them with reference to two case studies assumed as representative of a wide class of existing buildings.

  17. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664

  18. Quantitative modeling and optimization of magnetic tweezers.

    PubMed

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H

    2009-06-17

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664
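
    For the simple geometries mentioned, the Biot-Savart evaluation is a short numerical exercise. The current loop below is a generic stand-in, not the authors' permanent-magnet configuration; it only illustrates the route, checked against the known on-axis closed form.

      # Sketch: Biot-Savart field of a circular current loop, checked on axis.
      import numpy as np

      mu0 = 4e-7 * np.pi
      I, R = 1.0, 1e-3                    # illustrative current (A) and loop radius (m)

      def b_field(r, n=2000):
          """Numerically integrate Biot-Savart for a loop in the z = 0 plane."""
          phi = np.linspace(0.0, 2*np.pi, n, endpoint=False)
          pts = R * np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)
          dl = (2*np.pi*R/n) * np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)
          sep = r - pts                                      # vectors from loop elements to r
          inv_r3 = np.linalg.norm(sep, axis=1)**-3
          return mu0 * I / (4*np.pi) * np.sum(np.cross(dl, sep) * inv_r3[:, None], axis=0)

      z = 2e-3
      numeric = b_field(np.array([0.0, 0.0, z]))[2]
      exact = mu0 * I * R**2 / (2 * (R**2 + z**2)**1.5)      # on-axis closed form
      print(f"numeric Bz = {numeric:.6e} T, exact Bz = {exact:.6e} T")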

  19. Quantitative assessment of computational models for retinotopic map formation

    PubMed Central

    Sterratt, David C; Cutts, Catherine S; Willshaw, David J; Eglen, Stephen J

    2014-01-01

    Molecular and activity-based cues acting together are thought to guide retinal axons to their terminal sites in the vertebrate optic tectum or superior colliculus (SC) to form an ordered map of connections. The details of the mechanisms involved, and the degree to which they might interact, are still not well understood. We have developed a framework within which existing computational models can be assessed in an unbiased and quantitative manner against a set of experimental data curated from the mouse retinocollicular system. Our framework facilitates comparison between models, testing new models against known phenotypes and simulating new phenotypes in existing models. We have used this framework to assess four representative models that combine Eph/ephrin gradients and/or activity-based mechanisms and competition. Two of the models were updated from their original form to fit into our framework. The models were tested against five different phenotypes: wild type, Isl2-EphA3 ki/ki, Isl2-EphA3 ki/+, ephrin-A2,A3,A5 triple knock-out (TKO), and Math5 −/− (Atoh7). Two models successfully reproduced the extent of the Math5 −/− anteromedial projection, but only one of those could account for the collapse point in Isl2-EphA3 ki/+. The models needed a weak anteroposterior gradient in the SC to reproduce the residual order in the ephrin-A2,A3,A5 TKO phenotype, suggesting either an incomplete knock-out or the presence of another guidance molecule. Our article demonstrates the importance of testing retinotopic models against as full a range of phenotypes as possible, and we have made available the MATLAB software we wrote to facilitate this process. © 2014 Wiley Periodicals, Inc. Develop Neurobiol 75: 641–666, 2015 PMID:25367067

  20. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    SciTech Connect

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social, and economic factors may offer insight as to how prone a country (or set of countries) may be to inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict, both retrospectively and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempts at modeling conflict as a result of system-level interactions. This study presents modeling efforts built on limited data and working literature paradigms, together with recommendations for future attempts at modeling conflict.

  1. Quantitative analysis of numerical solvers for oscillatory biomolecular system models

    PubMed Central

    Quo, Chang F; Wang, May D

    2008-01-01

    Background This article provides guidelines for selecting optimal numerical solvers for biomolecular system models. Because various parameters of the same system could have drastically different ranges, from 10^-15 to 10^10, the ODEs can be stiff and ill-conditioned, resulting in non-unique, non-existing, or non-reproducible modeling solutions. Previous studies have not examined in depth how to best select numerical solvers for biomolecular system models, which makes it difficult to experimentally validate the modeling results. To address this problem, we have chosen one of the well-known stiff initial value problems with limit cycle behavior as a test-bed system model. Solving this model, we have illustrated that different answers may result from different numerical solvers. We use MATLAB numerical solvers because they are optimized and widely used by the modeling community. We have also conducted a systematic study of numerical solver performances by using qualitative and quantitative measures such as convergence, accuracy, and computational cost (i.e. in terms of function evaluations, partial derivatives, LU decompositions, and "take-off" points). The results show that the modeling solutions can be drastically different using different numerical solvers. Thus, it is important to intelligently select numerical solvers when solving biomolecular system models. Results The classic Belousov-Zhabotinskii (BZ) reaction is described by the Oregonator model and is used as a case study. We report two guidelines in selecting optimal numerical solver(s) for stiff, complex oscillatory systems: (i) for problems with unknown parameters, ode45 is the optimal choice regardless of the relative error tolerance; (ii) for known stiff problems, both ode113 and ode15s are good choices under strict relative tolerance conditions. Conclusions For any given biomolecular model, by building a library of numerical solvers with a quantitative performance assessment metric, we show that it is possible…
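
    The solver-selection experiment generalizes beyond MATLAB. As a minimal sketch, the code below runs SciPy's stiff-capable methods on the classic scaled Oregonator ("OREGO") initial value problem from Hairer and Wanner and compares function-evaluation counts; with this textbook parameterization an explicit method such as RK45 needs vastly more evaluations, which is itself a practical stiffness diagnostic. The parameter values are the standard OREGO test set, not necessarily those used in the article.

      # Sketch: comparing ODE solvers on the stiff Oregonator (OREGO) problem.
      from scipy.integrate import solve_ivp

      def orego(t, y):
          y1, y2, y3 = y
          return [77.27 * (y2 + y1 * (1.0 - 8.375e-6 * y1 - y2)),
                  (y3 - (1.0 + y1) * y2) / 77.27,
                  0.161 * (y1 - y3)]

      y0, t_span = [1.0, 2.0, 3.0], (0.0, 360.0)
      for method in ("LSODA", "BDF", "Radau"):     # stiff-capable solvers
          sol = solve_ivp(orego, t_span, y0, method=method, rtol=1e-6, atol=1e-9)
          print(f"{method:>6}: success={sol.success}, nfev={sol.nfev}")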

  2. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals

    EPA Science Inventory

    Protocols for terrestrial bioaccumulation assessments are far less-developed than for aquatic systems. This manuscript reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, inver...

  3. Fuzzy Logic as a Computational Tool for Quantitative Modelling of Biological Systems with Uncertain Kinetic Data.

    PubMed

    Bordon, Jure; Moskon, Miha; Zimic, Nikolaj; Mraz, Miha

    2015-01-01

    Quantitative modelling of biological systems has become an indispensable computational approach in the design of novel, and the analysis of existing, biological systems. However, kinetic data that describe the system's dynamics need to be known in order to obtain relevant results with conventional modelling techniques. These data are often hard or even impossible to obtain. Here, we present a quantitative fuzzy logic modelling approach that is able to cope with unknown kinetic data and thus produce relevant results even though kinetic data are incomplete or only vaguely defined. Moreover, the approach can be used in combination with existing state-of-the-art quantitative modelling techniques in only certain parts of the system, i.e., where kinetic data are missing. The case study of the approach proposed here is performed on a model of the three-gene repressilator. PMID:26451831
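
    The flavor of the approach can be conveyed in a few lines: where a kinetic rate law is unknown, a protein production rate is computed from linguistic rules over a repressor level instead of, say, a Hill function. The membership functions and rule outputs below are invented for illustration (a zero-order Sugeno-type system), not the paper's repressilator model.

      # Sketch: fuzzy production rate in place of an unknown kinetic rate law.
      import numpy as np

      def mu_low(x):  return np.clip(1.0 - x / 5.0, 0.0, 1.0)   # "repressor is low"
      def mu_high(x): return np.clip(x / 5.0, 0.0, 1.0)         # "repressor is high"

      def production_rate(repressor):
          # Rule 1: IF repressor low  THEN rate = 1.0  (fast transcription)
          # Rule 2: IF repressor high THEN rate = 0.05 (strong repression)
          w_lo, w_hi = mu_low(repressor), mu_high(repressor)
          return (w_lo * 1.0 + w_hi * 0.05) / (w_lo + w_hi)     # weighted-average defuzzification

      for r in (0.0, 2.5, 5.0):
          print(f"repressor = {r}: production rate = {production_rate(r):.3f}")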

  5. Existing Soil Carbon Models Do Not Apply to Forested Wetlands.

    SciTech Connect

    Trettin, C C; Song, B; Jurgensen, M F; Li, C

    2001-09-14

    This study evaluates 12 widely used soil carbon models to determine their applicability to wetland ecosystems. For any land area that includes wetlands, none of the individual models would produce reasonable simulations based on soil processes. The study presents a wetland soil carbon model framework based on desired attributes, the DNDC model, and components of the CENTURY and WMEM models. The proposed synthesis would be appropriate when considering soil carbon dynamics at multiple spatial scales and where the land area considered includes both wetland and upland ecosystems.

  6. Training of Existing Workers: Issues, Incentives and Models

    ERIC Educational Resources Information Center

    Mawer, Giselle; Jackson, Elaine

    2005-01-01

    This report presents issues associated with incentives for training existing workers in small to medium-sized firms, identified through a small sample of case studies from the retail, manufacturing, and building and construction industries. While the majority of employers recognise workforce skill levels are fundamental to the success of the…

  7. Global existence of solutions for a model Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Cercignani, C.

    1987-12-01

    A model recently introduced by Ianiro and Lebowitz is shown to have a global solution for initial data having a finite H-functional and belonging to L^1_v(L^∞_x). Methods previously introduced by Tartar to deal with discrete velocity models are used.

  8. Global existence of solutions for a model Boltzmann equation

    SciTech Connect

    Cercignani, C.

    1987-12-01

    A model recently introduced by Ianiro and Lebowitz is shown to have a global solution for initial data having a finite H-functional and belonging to L^1_v(L^∞_x). Methods previously introduced by Tartar to deal with discrete velocity models are used.

  9. A Quantitative Software Risk Assessment Model

    NASA Technical Reports Server (NTRS)

    Lee, Alice

    2002-01-01

    This slide presentation reviews a risk assessment model as applied to software development. The presentation uses graphs to demonstrate basic concepts of software reliability. It also discusses the application of the risk model to the software development life cycle.

  10. Mathematical Existence Results for the Doi-Edwards Polymer Model

    NASA Astrophysics Data System (ADS)

    Chupin, Laurent

    2016-07-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two dimensional case, without any restriction on the smallness of the data.

  11. What Are We Doing When We Translate from Quantitative Models?

    PubMed Central

    Critchfield, Thomas S; Reed, Derek D

    2009-01-01

    Although quantitative analysis (in which behavior principles are defined in terms of equations) has become common in basic behavior analysis, translational efforts often examine everyday events through the lens of narrative versions of laboratory-derived principles. This approach to translation, although useful, is incomplete because equations may convey concepts that are difficult to capture in words. To support this point, we provide a nontechnical introduction to selected aspects of quantitative analysis; consider some issues that translational investigators (and, potentially, practitioners) confront when attempting to translate from quantitative models; and discuss examples of relevant translational studies. We conclude that, where behavior-science translation is concerned, the quantitative features of quantitative models cannot be ignored without sacrificing conceptual precision, scientific and practical insights, and the capacity of the basic and applied wings of behavior analysis to communicate effectively. PMID:22478533
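
    One concrete instance of the kind of equation the authors have in mind (our example, not one drawn from the article) is the generalized matching law, a staple of quantitative behavior analysis:

```latex
% Generalized matching law (log-ratio form)
\[
  \log\frac{B_1}{B_2} \;=\; s \,\log\frac{R_1}{R_2} \;+\; \log b
\]
% B_1, B_2: response rates on two alternatives; R_1, R_2: obtained
% reinforcement rates; s: sensitivity; b: bias. The parameters s and b
% carry content (under- or over-matching, preference bias) that a purely
% narrative statement of "behavior matches reinforcement" omits.
```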

  12. A Transformative Model for Undergraduate Quantitative Biology Education

    ERIC Educational Resources Information Center

    Usher, David C.; Driscoll, Tobin A.; Dhurjati, Prasad; Pelesko, John A.; Rossi, Louis F.; Schleiniger, Gilberto; Pusecker, Kathleen; White, Harold B.

    2010-01-01

    The "BIO2010" report recommended that students in the life sciences receive a more rigorous education in mathematics and physical sciences. The University of Delaware approached this problem by (1) developing a bio-calculus section of a standard calculus course, (2) embedding quantitative activities into existing biology courses, and (3) creating…

  13. Determining if Instructional Delivery Model Differences Exist in Remedial English

    ERIC Educational Resources Information Center

    Carter, LaTanya Woods

    2012-01-01

    The purpose of this causal comparative study is to test the theory of no significant difference that compares pre- and post-test assessment scores, controlling for the instructional delivery model of online and face-to-face students at a Mid-Atlantic university. Online education and virtual distance learning programs have increased in popularity…

  14. Why Do We Exist? Forming a Model for Vocational Education.

    ERIC Educational Resources Information Center

    Gradwell, John; McWethy, David

    1997-01-01

    To expose students to a broad range of careers, this seven-step model of a successful vocational education and training program includes basic skills, applied academics, further education, entrepreneurship, career exploration, student employment, and partnerships. Provides examples of what is being done in Germany, Japan, and Denmark. (JOW)

  15. Exploring Higher Education Business Models ("If Such a Thing Exists")

    ERIC Educational Resources Information Center

    Harney, John O.

    2013-01-01

    The global economic recession has caused students, parents, and policymakers to reevaluate personal and societal investments in higher education--and has prompted the realization that traditional higher ed "business models" may be unsustainable. Predicting a shakeout, most presidents expressed confidence for their own school's ability to…

  16. Towards a quantitative model of the post-synaptic proteome.

    PubMed

    Sorokina, Oksana; Sorokin, Anatoly; Armstrong, J Douglas

    2011-10-01

    The postsynaptic compartment of the excitatory glutamatergic synapse contains hundreds of distinct polypeptides with a wide range of functions (signalling, trafficking, cell-adhesion, etc.). Structural dynamics in the post-synaptic density (PSD) are believed to underpin cognitive processes. Although functionally and morphologically diverse, PSD proteins are generally enriched with specific domains, which precisely define the mode of clustering essential for signal processing. We applied a stochastic calculus of domain binding provided by a rule-based modelling approach to formalise the highly combinatorial signalling pathway in the PSD and perform the numerical analysis of the relative distribution of protein complexes and their sizes. We specified the combinatorics of protein interactions in the PSD by rules, taking into account protein domain structure, specific domain affinity and relative protein availability. With this model we interrogated the critical conditions for the protein aggregation into large complexes and distribution of both size and composition. The presented approach extends existing qualitative protein-protein interaction maps by considering the quantitative information for stoichiometry and binding properties for the elements of the network. This results in a more realistic view of the postsynaptic proteome at the molecular level. PMID:21874189
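
    The flavor of such a rule-based treatment can be suggested with a toy simulation (ours, and far simpler than the stochastic domain-binding calculus used in the paper): complexes carry counts of free binding domains, a single rule lets a free a-domain bind a free b-domain, and repeated rule firings yield a complex-size distribution. Species counts and domain valences below are invented, and intra-complex (ring-closing) binding is ignored.

```python
# Toy sketch of rule-based, domain-limited aggregation.
import random
from collections import Counter

random.seed(1)
# Each complex tracks its size and its remaining free domains of each kind.
# Rule: one free a-domain (on species A) may bind one free b-domain (on B).
complexes = ([{"size": 1, "a": 3, "b": 0} for _ in range(200)] +  # A: 3 a-domains
             [{"size": 1, "a": 0, "b": 2} for _ in range(300)])   # B: 2 b-domains

for _ in range(100000):
    c1, c2 = random.sample(range(len(complexes)), 2)
    x, y = complexes[c1], complexes[c2]
    if x["a"] > 0 and y["b"] > 0:          # rule fires: merge the two complexes
        x["size"] += y["size"]
        x["a"] += y["a"] - 1               # one a-domain and one b-domain
        x["b"] += y["b"] - 1               # are consumed by the new bond
        complexes.pop(c2)
        if len(complexes) < 2:
            break

print(Counter(c["size"] for c in complexes).most_common(5))
```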

  17. Quantitative Model of the Cerro Prieto Field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-01-21

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field. It takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled. The results of this model match reasonably well observed pressure and temperature distributions. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  18. Quantitative model of the Cerro Prieto field

    SciTech Connect

    Halfman, S.E.; Lippmann, M.J.; Bodvarsson, G.S.

    1986-03-01

    A three-dimensional model of the Cerro Prieto geothermal field, Mexico, is under development. It is based on an updated version of LBL's hydrogeologic model of the field. It takes into account major faults and their effects on fluid and heat flow in the system. First, the field under natural state conditions is modeled. The results of this model match reasonably well observed pressure and temperature distributions. Then, a preliminary simulation of the early exploitation of the field is performed. The results show that the fluid in Cerro Prieto under natural state conditions moves primarily from east to west, rising along a major normal fault (Fault H). Horizontal fluid and heat flow occurs in a shallower region in the western part of the field due to the presence of permeable intergranular layers. Estimates of permeabilities in major aquifers are obtained, and the strength of the heat source feeding the hydrothermal system is determined.

  19. Modeling with Young Students--Quantitative and Qualitative.

    ERIC Educational Resources Information Center

    Bliss, Joan; Ogborn, Jon; Boohan, Richard; Brosnan, Tim; Mellar, Harvey; Sakonidis, Babis

    1999-01-01

    A project created tasks and tools to investigate quality and nature of 11- to 14-year-old pupils' reasoning with quantitative and qualitative computer-based modeling tools. Tasks and tools were used in two innovative modes of learning: expressive, where pupils created their own models, and exploratory, where pupils investigated an expert's model.…

  20. A quantitative model for hyaline membrane disease.

    PubMed

    Rojas, J; Green, R S; Fannon, L; Olssen, T; Lindstrom, D P; Stahlman, M T; Cotton, R B

    1982-01-01

    A model based on the course of 31 infants with uncomplicated hyaline membrane disease is described. Based on data collected over the first 12 hr of life, it predicts the course of an infant for the next 60 hr, and estimates the outcome in terms of length of oxygen requirement and assisted ventilation. For the construction of the model, right-to-left intra- and extra-pulmonary shunting, expressed as venous admixture, was considered as the principal mechanism of hypoxemia in hyaline membrane disease and mean applied proximal airway pressure was used to quantify management. The model provides an objective estimate of severity early in the course of disease, uses variables routinely available in an intensive care unit, and its use would strengthen the interpretation of clinical studies in which the comparability of experimental and control groups is critical. PMID:7070873

  1. Quantitative coronary angiography with deformable spline models.

    PubMed

    Klein, A K; Lee, F; Amini, A A

    1997-10-01

    Although current edge-following schemes can be very efficient in determining coronary boundaries, they may fail when the feature to be followed is disconnected (and the scheme is unable to bridge the discontinuity) or when branch points exist where the best path to follow is indeterminate. In this paper, we present new deformable spline algorithms for determining vessel boundaries, and enhancing their centerline features. A bank of even and odd S-Gabor filter pairs of different orientations is convolved with vascular images in order to create an external snake energy field. Each filter pair will give maximum response to the segment of vessel having the same orientation as the filters. The resulting responses across filters of different orientations are combined to create an external energy field for snake optimization. Vessels are represented by B-Spline snakes, and are optimized on filter outputs with dynamic programming. The points of minimal constriction and the percent-diameter stenosis are determined from a computed vessel centerline. The system has been statistically validated using fixed stenosis and flexible-tube phantoms. It has also been validated on 20 coronary lesions with two independent operators, and has been tested for interoperator and intraoperator variability and reproducibility. The system has been found to be especially robust in complex images involving vessel branchings and incomplete contrast filling.
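
    A minimal sketch of the external-energy construction follows (ours; plain Gabor kernels stand in for the paper's S-Gabor filters, and all filter parameters are placeholders): even/odd quadrature pairs at several orientations are convolved with the image, and the maximum oriented energy, negated, serves as the snake's external energy field.

```python
# Sketch: oriented quadrature (even/odd) filter bank -> snake energy field.
import numpy as np
from scipy.ndimage import convolve

def gabor_pair(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    even = env * np.cos(2 * np.pi * xr / wavelength)  # symmetric (line) filter
    odd = env * np.sin(2 * np.pi * xr / wavelength)   # antisymmetric (edge) filter
    return even, odd

def snake_energy_field(image, n_orientations=8):
    """Quadrature energy, maximized across orientations, negated for minimization."""
    img = image.astype(float)
    best = np.zeros_like(img)
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        even, odd = gabor_pair(theta=theta)
        energy = convolve(img, even) ** 2 + convolve(img, odd) ** 2
        best = np.maximum(best, energy)
    return -best  # snakes minimize energy, so strong vessel response = low energy

energy = snake_energy_field(np.random.rand(64, 64))  # stand-in angiogram
print(energy.shape, energy.min())
```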

  2. The Impact of School Climate on Student Achievement in the Middle Schools of the Commonwealth of Virginia: A Quantitative Analysis of Existing Data

    ERIC Educational Resources Information Center

    Bergren, David Alexander

    2014-01-01

    This quantitative study was designed to be an analysis of the relationship between school climate and student achievement through the creation of an index of climate-factors (SES, discipline, attendance, and school size) for which publicly available data existed. The index that was formed served as a proxy measure of climate; it was analyzed…

  3. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to an interior of any size or shape, at any scale of consideration, from the Space Station as a whole down to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  4. Steps toward quantitative infrasound propagation modeling

    NASA Astrophysics Data System (ADS)

    Waxler, Roger; Assink, Jelle; Lalande, Jean-Marie; Velea, Doru

    2016-04-01

    Realistic propagation modeling requires models capable of incorporating the relevant physical phenomena as well as sufficiently accurate atmospheric specifications. The wind speed and temperature gradients in the atmosphere provide multiple ducts in which low frequency sound, infrasound, can propagate efficiently. The winds in the atmosphere are quite variable, both temporally and spatially, causing the sound ducts to fluctuate. For ground-to-ground propagation the ducts can be borderline, in that small perturbations can create or destroy a duct. In such cases the signal propagation is very sensitive to fluctuations in the wind, often producing highly dispersed signals. The accuracy of atmospheric specifications is constantly improving as sounding technology develops. There is, however, a disconnect between sound propagation and atmospheric specification, in that atmospheric specifications are necessarily statistical in nature while sound propagates through a particular atmospheric state. In addition, infrasonic signals can travel to great altitudes, on the order of 120 km, before refracting back to earth. At such altitudes the atmosphere becomes quite rarefied, causing sound propagation to become highly nonlinear and strongly attenuated. Approaches to these problems will be presented.

  5. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task.
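
    The composite-score step can be illustrated with a short sketch (simulated data, not the authors' analysis script): standardize the three quantitative measures, take their first principal component as the quantitative pathway, and regress a numerical outcome on it.

```python
# Sketch: PCA composite of three quantitative measures, then regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 141
quant_measures = rng.normal(size=(n, 3))  # subitizing, counting, comparison (fake)
arithmetic = quant_measures.sum(axis=1) + rng.normal(scale=0.5, size=n)

z = StandardScaler().fit_transform(quant_measures)
quant_pathway = PCA(n_components=1).fit_transform(z)  # composite pathway score

model = LinearRegression().fit(quant_pathway, arithmetic)
print("R^2 of the quantitative pathway alone:", model.score(quant_pathway, arithmetic))
```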

  6. Refining the quantitative pathway of the Pathways to Mathematics model.

    PubMed

    Sowinski, Carla; LeFevre, Jo-Anne; Skwarchuk, Sheri-Lynn; Kamawar, Deepthi; Bisanz, Jeffrey; Smith-Chant, Brenda

    2015-03-01

    In the current study, we adopted the Pathways to Mathematics model of LeFevre et al. (2010). In this model, there are three cognitive domains--labeled as the quantitative, linguistic, and working memory pathways--that make unique contributions to children's mathematical development. We attempted to refine the quantitative pathway by combining children's (N=141 in Grades 2 and 3) subitizing, counting, and symbolic magnitude comparison skills using principal components analysis. The quantitative pathway was examined in relation to dependent numerical measures (backward counting, arithmetic fluency, calculation, and number system knowledge) and a dependent reading measure, while simultaneously accounting for linguistic and working memory skills. Analyses controlled for processing speed, parental education, and gender. We hypothesized that the quantitative, linguistic, and working memory pathways would account for unique variance in the numerical outcomes; this was the case for backward counting and arithmetic fluency. However, only the quantitative and linguistic pathways (not working memory) accounted for unique variance in calculation and number system knowledge. Not surprisingly, only the linguistic pathway accounted for unique variance in the reading measure. These findings suggest that the relative contributions of quantitative, linguistic, and working memory skills vary depending on the specific cognitive task. PMID:25521665

  7. Stoffenmanager exposure model: development of a quantitative algorithm.

    PubMed

    Tielemans, Erik; Noy, Dook; Schinkel, Jody; Heussen, Henri; Van Der Schaaf, Doeke; West, John; Fransman, Wouter

    2008-08-01

    In The Netherlands, the web-based tool called 'Stoffenmanager' was initially developed to assist small- and medium-sized enterprises to prioritize and control risks of handling chemical products in their workplaces. The aim of the present study was to explore the accuracy of the Stoffenmanager exposure algorithm. This was done by comparing its semi-quantitative exposure rankings for specific substances with exposure measurements collected from several occupational settings to derive a quantitative exposure algorithm. Exposure data were collected using two strategies. First, we conducted seven surveys specifically for validation of the Stoffenmanager. Second, existing occupational exposure data sets were collected from various sources. This resulted in 378 and 320 measurements for solid and liquid scenarios, respectively. The Spearman correlation coefficients between Stoffenmanager scores and exposure measurements appeared to be good for handling solids (r(s) = 0.80, N = 378, P < 0.0001) and liquid scenarios (r(s) = 0.83, N = 320, P < 0.0001). However, the correlation for liquid scenarios appeared to be lower when calculated separately for sets of volatile substances with a vapour pressure >10 Pa (r(s) = 0.56, N = 104, P < 0.0001) and non-volatile substances with a vapour pressure < or =10 Pa (r(s) = 0.53, N = 216, P < 0.0001). The mixed-effect regression models with natural log-transformed Stoffenmanager scores as independent parameter explained a substantial part of the total exposure variability (52% for solid scenarios and 76% for liquid scenarios). Notwithstanding the good correlation, the data show substantial variability in exposure measurements given a certain Stoffenmanager score. The overall performance increases our confidence in the use of the Stoffenmanager as a generic tool for risk assessment. The mixed-effect regression models presented in this paper may be used for assessment of so-called reasonable worst case exposures. This evaluation is
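
    A sketch of the reported regression structure follows, with invented data (the grouping variable, sample values, and effect sizes are all placeholders): natural-log-transformed exposure regressed on the natural-log-transformed Stoffenmanager score, with a random intercept per workplace.

```python
# Sketch: mixed-effect regression of ln(exposure) on ln(score).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "score": rng.uniform(1, 100, size=320),      # Stoffenmanager score (fake)
    "workplace": rng.integers(0, 20, size=320),  # grouping factor (fake)
})
df["ln_exposure"] = (0.8 * np.log(df["score"])
                     + 0.05 * df["workplace"]    # workplace-level shift
                     + rng.normal(size=320))     # residual variability

fit = smf.mixedlm("ln_exposure ~ np.log(score)", df, groups=df["workplace"]).fit()
print(fit.summary())
```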

  8. New Quantitative Structure-Activity Relationship Models Improve Predictability of Ames Mutagenicity for Aromatic Azo Compounds.

    PubMed

    Manganelli, Serena; Benfenati, Emilio; Manganaro, Alberto; Kulkarni, Sunil; Barton-Maclaren, Tara S; Honma, Masamitsu

    2016-10-01

    Existing Quantitative Structure-Activity Relationship (QSAR) models have limited predictive capabilities for aromatic azo compounds. In this study, 2 new models were built to predict Ames mutagenicity of this class of compounds. The first one made use of descriptors based on simplified molecular input-line entry system (SMILES), calculated with the CORAL software. The second model was based on the k-nearest neighbors algorithm. The statistical quality of the predictions from single models was satisfactory. The performance further improved when the predictions from these models were combined. The prediction results from other QSAR models for mutagenicity were also evaluated. Most of the existing models were found to be good at finding toxic compounds but resulted in many false positive predictions. The 2 new models specific for this class of compounds avoid this problem thanks to a larger set of related compounds as training set and improved algorithms.
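
    The second model type is easy to sketch (below, with random stand-ins for the descriptor matrix and Ames labels; real QSAR work derives descriptors from the chemical structures themselves): a k-nearest-neighbors classifier scored by cross-validation.

```python
# Sketch: k-nearest-neighbors QSAR classifier on placeholder descriptors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 12))    # molecular descriptors (placeholder)
y = rng.integers(0, 2, size=150)  # Ames result: 1 mutagenic, 0 not (placeholder)

knn = KNeighborsClassifier(n_neighbors=5)
print("CV accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```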

  9. New Quantitative Structure-Activity Relationship Models Improve Predictability of Ames Mutagenicity for Aromatic Azo Compounds.

    PubMed

    Manganelli, Serena; Benfenati, Emilio; Manganaro, Alberto; Kulkarni, Sunil; Barton-Maclaren, Tara S; Honma, Masamitsu

    2016-10-01

    Existing Quantitative Structure-Activity Relationship (QSAR) models have limited predictive capabilities for aromatic azo compounds. In this study, 2 new models were built to predict Ames mutagenicity of this class of compounds. The first one made use of descriptors based on simplified molecular input-line entry system (SMILES), calculated with the CORAL software. The second model was based on the k-nearest neighbors algorithm. The statistical quality of the predictions from single models was satisfactory. The performance further improved when the predictions from these models were combined. The prediction results from other QSAR models for mutagenicity were also evaluated. Most of the existing models were found to be good at finding toxic compounds but resulted in many false positive predictions. The 2 new models specific for this class of compounds avoid this problem thanks to a larger set of related compounds as training set and improved algorithms. PMID:27413112

  10. A GPGPU accelerated modeling environment for quantitatively characterizing karst systems

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Covington, M. D.; Luhmann, A. J.; Saar, M. O.

    2011-12-01

    The ability to derive quantitative information on the geometry of karst aquifer systems is highly desirable. Knowing the geometric makeup of a karst aquifer system enables quantitative characterization of the system's response to hydraulic events. However, the relationship between flow path geometry and karst aquifer response is not well understood. One method to improve this understanding is the use of high-speed modeling environments. High-speed modeling environments offer great potential in this regard as they allow researchers to improve their understanding of the modeled karst aquifer through fast quantitative characterization. To that end, we have implemented a finite difference model using General Purpose Graphics Processing Units (GPGPUs). GPGPUs are special-purpose accelerators which are capable of high-speed and highly parallel computation. The GPGPU architecture is a grid-like structure, making it a natural fit for structured systems like finite difference models. To characterize the highly complex nature of karst aquifer systems, our modeling environment is designed to use an inverse method to conduct the parameter tuning. Using an inverse method reduces the total amount of parameter space needed to produce a set of parameters describing a system of good fit. Systems of good fit are determined with a comparison to reference storm responses. To obtain reference storm responses we have collected data from a series of data-loggers measuring water depth, temperature, and conductivity at locations along a cave stream with a known geometry in southeastern Minnesota. By comparing the modeled responses to the reference responses, the model parameters can be tuned to quantitatively characterize geometry, and thus, the response of the karst system.

  11. Quantitative ²³Na magnetic resonance imaging of model foods.

    PubMed

    Veliyulin, Emil; Egelandsdal, Bjørg; Marica, Florin; Balcom, Bruce J

    2009-05-27

    Partial ²³Na MRI invisibility in muscle foods is often referred to as an inherent drawback of the MRI technique, impairing quantitative sodium analysis. Several model samples were designed to simulate muscle foods with a broad variation in protein, fat, moisture, and salt content. ²³Na spin-echo MRI and a recently developed ²³Na SPRITE MRI approach were compared for quantitative sodium imaging, demonstrating the possibility of accurate quantitative ²³Na MRI by the latter method. Good correlations with chemically determined standards were also obtained from bulk ²³Na free induction decay (FID) and CPMG relaxation experiments on the same sample set, indicating their potential use for rapid bulk NaCl measurements. Thus, the sodium MRI invisibility is a methodological problem that can easily be circumvented by using the SPRITE MRI technique. PMID:21314196

  12. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology

    PubMed Central

    Bachmann, Julie; Matteson, Andrew; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D.; Theis, Fabian J.; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amounts of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows one to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach, data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on Latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642
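
    The winning optimization strategy can be sketched in a few lines (ours; the objective is a stand-in for a model's sum of squared residuals): draw starting points by Latin hypercube sampling, run a derivative-based local optimizer from each, and keep the best fit.

```python
# Sketch: multi-start local optimization with Latin hypercube starting points.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def objective(p):  # placeholder for a model's sum-of-squared-residuals
    return np.sum((p - np.array([1.0, -2.0, 0.5])) ** 2 * [1, 10, 100])

bounds = np.array([[-5.0, 5.0]] * 3)
starts = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(20),
                   bounds[:, 0], bounds[:, 1])

fits = [minimize(objective, x0, method="L-BFGS-B", bounds=bounds) for x0 in starts]
best = min(fits, key=lambda r: r.fun)
print(best.x, best.fun)
```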

  13. Lessons learned from quantitative dynamical modeling in systems biology.

    PubMed

    Raue, Andreas; Schilling, Marcel; Bachmann, Julie; Matteson, Andrew; Schelker, Max; Schelke, Max; Kaschek, Daniel; Hug, Sabine; Kreutz, Clemens; Harms, Brian D; Theis, Fabian J; Klingmüller, Ursula; Timmer, Jens

    2013-01-01

    Due to the high complexity of biological data it is difficult to disentangle cellular processes relying only on intuitive interpretation of measurements. A Systems Biology approach that combines quantitative experimental data with dynamic mathematical modeling promises to yield deeper insights into these processes. Nevertheless, with growing complexity and increasing amounts of quantitative experimental data, building realistic and reliable mathematical models can become a challenging task: the quality of experimental data has to be assessed objectively, unknown model parameters need to be estimated from the experimental data, and numerical calculations need to be precise and efficient. Here, we discuss, compare and characterize the performance of computational methods throughout the process of quantitative dynamic modeling using two previously established examples, for which quantitative, dose- and time-resolved experimental data are available. In particular, we present an approach that allows one to determine the quality of experimental data in an efficient, objective and automated manner. Using this approach, data generated by different measurement techniques and even in single replicates can be reliably used for mathematical modeling. For the estimation of unknown model parameters, the performance of different optimization algorithms was compared systematically. Our results show that deterministic derivative-based optimization employing the sensitivity equations in combination with a multi-start strategy based on Latin hypercube sampling outperforms the other methods by orders of magnitude in accuracy and speed. Finally, we investigated transformations that yield a more efficient parameterization of the model and therefore lead to a further enhancement in optimization performance. We provide a freely available open source software package that implements the algorithms and examples compared here. PMID:24098642

  14. Quantitative and logic modelling of gene and molecular networks

    PubMed Central

    Le Novère, Nicolas

    2015-01-01

    Behaviours of complex biomolecular systems are often irreducible to the elementary properties of their individual components. Explanatory and predictive mathematical models are therefore useful for fully understanding and precisely engineering cellular functions. The development and analyses of these models require their adaptation to the problems that need to be solved and the type and amount of available genetic or molecular data. Quantitative and logic modelling are among the main methods currently used to model molecular and gene networks. Each approach comes with inherent advantages and weaknesses. Recent developments show that hybrid approaches will become essential for further progress in synthetic biology and in the development of virtual organisms. PMID:25645874

  15. Existence of almost periodic solution of a model of phytoplankton allelopathy with delay

    NASA Astrophysics Data System (ADS)

    Abbas, Syed; Mahto, Lakshman

    2012-09-01

    In this paper we discuss a non-autonomous two-species competitive allelopathic phytoplankton model in which each species produces a chemical that stimulates the growth of the other. We study the existence and uniqueness of an almost periodic solution for the model system. Sufficient conditions are derived for the existence of a unique almost periodic solution.

  16. Existence of periodic solutions in a model of respiratory syncytial virus RSV

    NASA Astrophysics Data System (ADS)

    Arenas, Abraham J.; González, Gilberto; Jódar, Lucas

    2008-08-01

    In this paper we study the existence of positive periodic solutions for nested models of respiratory syncytial virus (RSV), using a continuation theorem based on coincidence degree theory. Conditions for the existence of periodic solutions in the model are given. Numerical simulations related to the transmission of respiratory syncytial virus in Madrid and Rio de Janeiro are included.

  17. An evaluation of recent quantitative magnetospheric magnetic field models

    NASA Technical Reports Server (NTRS)

    Walker, R. J.

    1976-01-01

    Magnetospheric field models involving dipole tilt effects are discussed, with particular reference to defined magnetopause models and boundary surface models. The models are compared with observations and with each other whenever possible. It is shown that models containing only contributions from magnetopause and tail current systems are capable of reproducing the observed quiet time field only in a qualitative way. The best quantitative agreement between models and observations takes place when currents distributed in the inner magnetosphere are added to the magnetopause and tail current systems. One region in which all the models fall short is the region around the polar cusp. Obtaining physically reasonable gradients there should have high priority in the development of future models.

  18. Transgenic models of Alzheimer's disease: better utilization of existing models through viral transgenesis.

    PubMed

    Platt, Thomas L; Reeves, Valerie L; Murphy, M Paul

    2013-09-01

    Animal models have been used for decades in the Alzheimer's disease (AD) research field and have been crucial for the advancement of our understanding of the disease. Most models are based on familial AD mutations of genes involved in the amyloidogenic process, such as the amyloid precursor protein (APP) and presenilin 1 (PS1). Some models also incorporate mutations in tau (MAPT) known to cause frontotemporal dementia, a neurodegenerative disease that shares some elements of neuropathology with AD. While these models are complex, they fail to display pathology that perfectly recapitulates that of the human disease. Unfortunately, this level of pre-existing complexity creates a barrier to the further modification and improvement of these models. However, as the efficacy and safety of viral vectors improves, their use as an alternative to germline genetic modification is becoming a widely used research tool. In this review we discuss how this approach can be used to better utilize common mouse models in AD research. This article is part of a Special Issue entitled: Animal Models of Disease.

  19. Cross-bridge model of muscle contraction. Quantitative analysis.

    PubMed Central

    Eisenberg, E; Hill, T L; Chen, Y

    1980-01-01

    We recently presented, in a qualitative manner, a cross-bridge model of muscle contraction which was based on a biochemical kinetic cycle for the actomyosin ATPase activity. This cross-bridge model consisted of two cross-bridge states detached from actin and two cross-bridge states attached to actin. In the present paper, we attempt to fit this model quantitatively to both biochemical and physiological data. We find that the resulting complete cross-bridge model is able to account reasonably well for both the isometric transient data observed when a muscle is subjected to a sudden change in length and for the relationship between the velocity of muscle contraction in vivo and the actomyosin ATPase activity in vitro. This model also illustrates the interrelationship between biochemical and physiological data necessary for the development of a complete cross-bridge model of muscle contraction. PMID:6455168

  20. Reconstruction of Existing Reservoir Model for Its Calibration to Dynamic Data

    NASA Astrophysics Data System (ADS)

    Le Ravalec-Dupin, M.; Hu, L. Y.; Roggero, F.

    The increase in computer power and the recent developments in history-matching can motivate the reexamination of previously built reservoir models. To save both engineering time and CPU time, four distinct algorithms were formulated that allow an existing reservoir model to be rebuilt without restarting the reservoir study from scratch. The algorithms involve techniques such as optimization, relaxation, Wiener filtering, or sequential reconstruction. They are used to identify a stochastic function and a set of random numbers. Given the stochastic function, the random numbers yield a realization that is close to the existing reservoir model. Once the random numbers are known, the existing reservoir model can be submitted to a new history-matching process to improve the data fit or to account for newly collected data. A practical implementation is presented within the context of facies reservoirs. This article focuses on a previously built facies reservoir model. Although the simulation procedure is unknown to the authors, a set of random numbers is identified so that, when provided to a multiple-point statistics simulator, a realization very close to the existing reservoir model is obtained. A new history-matching procedure is then run to update the existing reservoir model and to integrate the fractional flow rates measured in two producing wells drilled after the building of the existing reservoir model.

  1. Quantitative magnetospheric models derived from spacecraft magnetometer data

    NASA Technical Reports Server (NTRS)

    Mead, G. D.; Fairfield, D. H.

    1973-01-01

    Quantitative models of the external magnetospheric field were derived by making least-squares fits to magnetic field measurements from four IMP satellites. The data were fit to a power series expansion in the solar magnetic coordinates and the solar wind-dipole tilt angle, and thus the models contain the effects of seasonal north-south asymmetries. The expansions are divergence-free, but unlike the usual scalar potential expansions, the models contain a nonzero curl representing currents distributed within the magnetosphere. Characteristics of four models are presented, representing different degrees of magnetic disturbance as determined by the range of Kp values. The latitude at the earth separating open polar cap field lines from field lines closing on the dayside is about 5 deg lower than that determined by previous theoretically-derived models. At times of high Kp, additional high latitude field lines are drawn back into the tail.

  2. The conceptual approach to quantitative modeling of guard cells.

    PubMed

    Blatt, Michael R; Hills, Adrian; Chen, Zhong-Hua; Wang, Yizhou; Papanatsiou, Maria; Lew, Vigilio L

    2013-01-01

    Much of the 70% of global water usage associated with agriculture passes through stomatal pores of plant leaves. The guard cells, which regulate these pores, thus have a profound influence on photosynthetic carbon assimilation and water use efficiency of plants. We recently demonstrated how quantitative mathematical modeling of guard cells with the OnGuard modeling software yields detail sufficient to guide phenotypic and mutational analysis. This advance represents an all-important step toward applications in directing "reverse-engineering" of guard cell function for improved water use efficiency and carbon assimilation. OnGuard is nonetheless challenging for those unfamiliar with a modeler's way of thinking. In practice, each model construct represents a hypothesis under test, to be discarded, validated or refined by comparisons between model predictions and experimental results. The few guidelines set out here summarize the standard and logical starting points for users of the OnGuard software.

  3. Quantitative comparison between crowd models for evacuation planning and evaluation

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vaisagh; Lee, Chong Eu; Lees, Michael Harold; Cheong, Siew Ann; Sloot, Peter M. A.

    2014-02-01

    Crowd simulation is rapidly becoming a standard tool for evacuation planning and evaluation. However, the many crowd models in the literature are structurally different, and few have been rigorously calibrated against real-world egress data, especially in emergency situations. In this paper we describe a procedure to quantitatively compare different crowd models or between models and real-world data. We simulated three models: (1) the lattice gas model, (2) the social force model, and (3) the RVO2 model, and obtained the distributions of six observables: (1) evacuation time, (2) zoned evacuation time, (3) passage density, (4) total distance traveled, (5) inconvenience, and (6) flow rate. We then used the DISTATIS procedure to compute the compromise matrix of statistical distances between the three models. Projecting the three models onto the first two principal components of the compromise matrix, we find the lattice gas and RVO2 models are similar in terms of the evacuation time, passage density, and flow rates, whereas the social force and RVO2 models are similar in terms of the total distance traveled. Most importantly, we find that the zoned evacuation times of the three models to be very different from each other. Thus we propose to use this variable, if it can be measured, as the key test between different models, and also between models and the real world. Finally, we compared the model flow rates against the flow rate of an emergency evacuation during the May 2008 Sichuan earthquake, and found the social force model agrees best with this real data.
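
    The comparison pipeline can be suggested with a compact stand-in (ours): simulate a distribution of one observable per model, compute pairwise Kolmogorov-Smirnov distances, and project the models into two dimensions, with classical MDS in place of the full DISTATIS procedure. All "model outputs" below are fabricated for illustration.

```python
# Sketch: pairwise statistical distances between models, then a 2-D projection.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
models = {"lattice_gas": rng.normal(100, 10, 500),   # evacuation times (fake)
          "social_force": rng.normal(120, 15, 500),
          "RVO2": rng.normal(102, 12, 500)}
names = list(models)
k = len(names)

D = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        D[i, j] = ks_2samp(models[names[i]], models[names[j]]).statistic

# Classical MDS: double-center squared distances, keep the top two axes.
J = np.eye(k) - np.ones((k, k)) / k
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))
for name, xy in zip(names, coords):
    print(name, xy)
```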

  4. Functional Linear Models for Association Analysis of Quantitative Traits

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Mills, James L.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao

    2014-01-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. PMID:24130119

  5. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study.

  6. Three models intercomparison for Quantitative Precipitation Forecast over Calabria

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.

    2004-11-01

    In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations), funded by MIUR (Ministero dell'Università e della Ricerca Scientifica, the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Model 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison. This model runs operationally at the “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici”. Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. The large-scale forcing is the same for all models considered, and differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models; the largest differences are in the BIA compared to the other scores considered. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better for the second-day forecast.
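
    Categorical scores such as the BIA (frequency bias) are computed from a 2x2 contingency table of forecast versus observed exceedances of a rain threshold at the raingauges; a sketch with invented rainfall values follows (the Equitable Threat Score is added as a related score of our choosing).

```python
# Sketch: categorical QPF verification from a 2x2 contingency table.
import numpy as np

def verification_scores(forecast, observed, threshold=1.0):
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    bia = (hits + false_alarms) / (hits + misses)   # frequency bias (BIA)
    # Equitable Threat Score: hits corrected for random-chance hits.
    n = f.size
    hits_rand = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_rand) / (hits + misses + false_alarms - hits_rand)
    return bia, ets

rng = np.random.default_rng(0)
fcst, obs = rng.gamma(1.0, 2.0, 75), rng.gamma(1.0, 2.0, 75)  # 75 raingauges (fake)
print(verification_scores(fcst, obs))
```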

  7. Quantitative phenomenological model of the BOLD contrast mechanism

    NASA Astrophysics Data System (ADS)

    Dickson, John D.; Ash, Tom W. J.; Williams, Guy B.; Sukstanskii, Alexander L.; Ansorge, Richard E.; Yablonskiy, Dmitriy A.

    2011-09-01

    Different theoretical models of the BOLD contrast mechanism are used for many applications including BOLD quantification (qBOLD) and vessel size imaging, both in health and disease. Each model simplifies the system under consideration, making approximations about the structure of the blood vessel network and diffusion of water molecules through inhomogeneities in the magnetic field created by deoxyhemoglobin-containing blood vessels. In this study, Monte-Carlo methods are used to simulate the BOLD MR signal generated by diffusing water molecules in the presence of long, cylindrical blood vessels. Using these simulations we introduce a new, phenomenological model that is far more accurate over a range of blood oxygenation levels and blood vessel radii than existing models. This model could be used to extract physiological parameters of the blood vessel network from experimental data in BOLD-based experiments. We use our model to establish ranges of validity for the existing analytical models of Yablonskiy and Haacke, Kiselev and Posse, Sukstanskii and Yablonskiy (extended to the case of arbitrary time in the spin echo sequence) and Bauer et al. (extended to the case of randomly oriented cylinders). Although these models are shown to be accurate in the limits of diffusion under which they were derived, none of them is accurate for the whole physiological range of blood vessels radii and blood oxygenation levels. We also show the extent of systematic errors that are introduced due to the approximations of these models when used for BOLD signal quantification.

  8. Models of quantitative estimations: rule-based and exemplar-based processes compared.

    PubMed

    von Helversen, Bettina; Rieskamp, Jörg

    2009-07-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model, the mapping model, that outperformed the exemplar model in a task thought to promote exemplar-based processing. This raised questions about the assumptions of rule-based versus exemplar-based models that underlie the notion of task contingency of cognitive processes. Rule-based models, such as the mapping model, assume the abstraction of explicit task knowledge. In contrast, exemplar models should profit if storage and activation of the exemplars is facilitated. Two studies tested the importance of the two models' assumptions. When knowledge about cues existed, the rule-based mapping model predicted quantitative estimations best. In contrast, when knowledge about the cues was difficult to gain, participants' estimations were best described by an exemplar model. The results emphasize the task contingency of cognitive processes. PMID:19586258
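
    The two model classes can be contrasted in a few lines (our toy example, with invented cue profiles and criterion values): an exemplar model predicts by similarity-weighted averaging of stored instances, while a cue-abstraction rule predicts from weights estimated across training items.

```python
# Sketch: exemplar-based vs. rule-based (cue abstraction) estimation.
import numpy as np

train_cues = np.array([[1, 0, 1, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 0, 1]])
train_crit = np.array([12.0, 15.0, 9.0, 7.0])

def exemplar_estimate(probe, h=1.0):
    """Similarity-weighted average of stored exemplars (GCM-style similarity)."""
    sim = np.exp(-h * np.abs(train_cues - probe).sum(axis=1))
    return np.sum(sim * train_crit) / np.sum(sim)

def linear_rule_estimate(probe):
    """Cue abstraction: weights fit by least squares (minimum-norm solution,
    since this toy training set is tiny)."""
    w, *_ = np.linalg.lstsq(np.c_[train_cues, np.ones(len(train_cues))],
                            train_crit, rcond=None)
    return np.r_[probe, 1.0] @ w

probe = np.array([1, 1, 0, 0])
print(exemplar_estimate(probe), linear_rule_estimate(probe))
```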

  9. Forces for Morphogenesis Investigated with Laser Microsurgery and Quantitative Modeling

    NASA Astrophysics Data System (ADS)

    Hutson, M. Shane; Tokutake, Yoichiro; Chang, Ming-Shien; Bloor, James W.; Venakides, Stephanos; Kiehart, Daniel P.; Edwards, Glenn S.

    2003-04-01

    We investigated the forces that connect the genetic program of development to morphogenesis in Drosophila. We focused on dorsal closure, a powerful model system for development and wound healing. We found that the bulk of progress toward closure is driven by contractility in supracellular ``purse strings'' and in the amnioserosa, whereas adhesion-mediated zipping coordinates the forces produced by the purse strings and is essential only for the end stages. We applied quantitative modeling to show that these forces, generated in distinct cells, are coordinated in space and synchronized in time. Modeling of wild-type and mutant phenotypes is predictive; although closure in myospheroid mutants ultimately fails when the cell sheets rip themselves apart, our analysis indicates that βPS integrin has an earlier, important role in zipping.

  10. A Team Mental Model Perspective of Pre-Quantitative Risk

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.

    2011-01-01

    This study was conducted to better understand how teams conceptualize risk before it can be quantified, and the processes by which a team forms a shared mental model of this pre-quantitative risk. Using an extreme case, this study analyzes seven months of team meeting transcripts, covering the entire lifetime of the team. Through an analysis of team discussions, a rich and varied structural model of risk emerges that goes significantly beyond classical representations of risk as the product of a negative consequence and a probability. In addition to those two fundamental components, the team conceptualization includes the ability to influence outcomes and probabilities, networks of goals, interaction effects, and qualitative judgments about the acceptability of risk, all affected by associated uncertainties. In moving from individual to team mental models, team members employ a number of strategies to gain group recognition of risks and to resolve or accept differences.

  11. Quantitative modeling of soil sorption for xenobiotic chemicals.

    PubMed Central

    Sabljić, A

    1989-01-01

    Experimentally determining soil sorption behavior of xenobiotic chemicals during the last 10 years has been costly, time-consuming, and very tedious. Since an estimated 100,000 chemicals are currently in common use and new chemicals are registered at a rate of 1000 per year, it is obvious that our human and material resources are insufficient to experimentally obtain their soil sorption data. Much work is being done to find alternative methods that will enable us to accurately and rapidly estimate the soil sorption coefficients of pesticides and other classes of organic pollutants. Empirical models, based on water solubility and n-octanol/water partition coefficients, have been proposed as alternative, accurate methods to estimate soil sorption coefficients. An analysis of the models has shown (a) low precision of water solubility and n-octanol/water partition data, (b) varieties of quantitative models describing the relationship between the soil sorption and above-mentioned properties, and (c) violations of some basic statistical laws when these quantitative models were developed. During the last 5 years considerable efforts were made to develop nonempirical models that are free of errors imminent to all models based on empirical variables. Thus far molecular topology has been shown to be the most successful structural property for describing and predicting soil sorption coefficients. The first-order molecular connectivity index was demonstrated to correlate extremely well with the soil sorption coefficients of polycyclic aromatic hydrocarbons (PAHs), alkylbenzenes, chlorobenzenes, chlorinated alkanes and alkenes, heterocyclic and heterosubstituted PAHs, and halogenated phenols. The average difference between predicted and observed soil sorption coefficients is only 0.2 on the logarithmic scale (corresponding to a factor of 1.5). A comparison of the molecular connectivity model with the empirical models described earlier shows that the former is superior in
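
    The first-order molecular connectivity index mentioned here has a simple closed form over the hydrogen-suppressed molecular graph: the sum over bonds of (δi δj)^(-1/2), where δ is an atom's heavy-atom degree. The sketch below computes it for naphthalene (the choice of example molecule is ours).

```python
# Sketch: first-order molecular connectivity (Randic) index from an edge list.
from math import sqrt
from collections import Counter

# Naphthalene: 10 carbons, 11 C-C bonds (hydrogen-suppressed graph);
# the two fused rings share the 4-5 bond.
bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),
         (4, 6), (6, 7), (7, 8), (8, 9), (9, 5)]

degree = Counter()
for i, j in bonds:
    degree[i] += 1
    degree[j] += 1

chi1 = sum(1.0 / sqrt(degree[i] * degree[j]) for i, j in bonds)
print(f"First-order connectivity index: {chi1:.3f}")  # 4.966 for naphthalene
```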

  12. A quantitative model for integrating landscape evolution and soil formation

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Stockmann, U.; Minasny, B.; McBratney, Alex B.

    2013-06-01

    Landscape evolution is closely related to soil formation. Quantitative modeling of the dynamics of soils and landscapes should therefore be integrated. This paper presents a model, named Model for Integrated Landscape Evolution and Soil Development (MILESD), which describes the interaction between pedogenetic and geomorphic processes. This mechanistic model includes the most significant soil formation processes, ranging from weathering to clay translocation, and combines these with the lateral redistribution of soil particles through erosion and deposition. The model is spatially explicit and simulates the vertical variation in soil horizon depth as well as basic soil properties such as texture and organic matter content. In addition, sediment export and its properties are recorded. This model is applied to a 6.25 km2 area in the Werrikimbe National Park, Australia, simulating soil development over a period of 60,000 years. Comparison with field observations shows how the model accurately predicts trends in total soil thickness along a catena. Soil texture and bulk density are predicted reasonably well, with errors of the order of 10%; however, field observations show a much higher organic carbon content than predicted. At the landscape scale, different scenarios with varying erosion intensity result in only small changes of landscape-averaged soil thickness, while the response of the total organic carbon stored in the system is higher. Rates of sediment export show a highly nonlinear response to soil development stage and the presence of a threshold, corresponding to the depletion of the soil reservoir, beyond which sediment export drops significantly.
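
    As a minimal sketch of the kind of mass balance such a model integrates (not MILESD itself), the snippet below evolves the thickness of a single soil column under a depth-dependent soil production function and a constant net erosion rate; all parameter values are hypothetical.

        import math

        # Hypothetical parameters for a single soil column.
        P0 = 2.5e-4    # maximum soil production rate (m/yr)
        h_star = 0.5   # e-folding depth of the production function (m)
        E = 1.0e-4     # net erosion rate (m/yr)

        h, dt = 0.0, 100.0        # soil thickness (m), 100-yr time step
        for _ in range(600):      # 60,000 years, matching the study period
            production = P0 * math.exp(-h / h_star)  # slows as soil thickens
            h = max(h + (production - E) * dt, 0.0)
        print(f"soil thickness after 60 kyr: {h:.2f} m")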

  13. Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code

    NASA Technical Reports Server (NTRS)

    Freeh, Josh

    2003-01-01

    Integration of the entire system includes: Fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) Optimization tools; 2) Gas turbine models for hybrid systems; 3) Increased interplay between subsystems; 4) Off-design modeling capabilities; 5) Altitude effects; and 6) Existing transient modeling architecture. Other factors include: 1) Easier transfer between users and groups of users; 2) General aerospace industry acceptance and familiarity; and 3) Flexible analysis tool that can also be used for ground power applications.

  14. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  15. Quantitative analysis of fault slip evolution in analogue transpression models

    NASA Astrophysics Data System (ADS)

    Leever, Karen; Gabrielsen, Roy H.; Schmid, Dani; Braathen, Alvar

    2010-05-01

    A quantitative analysis of fault slip evolution in crustal scale brittle and brittle-ductile analogue models of doubly vergent transpressional wedges was performed by means of Particle Image Velocimetry (PIV). The kinematic analyses allow detailed comparison between model results and field kinematic data. This novel approach leads to better understanding of the evolution of transpressional orogens such as the Tertiary West Spitsbergen fold and thrust belt in particular and will advance the understanding of transpressional wedge mechanics in general. We ran a series of basal-driven models with convergence angles of 4, 7.5, 15 and 30 degrees. In these crustal scale models, brittle rheology was represented by quartz sand; in one model a viscous PDMS layer was included at shallow depth. Total sand pack thickness was 6 cm, its extent 120 x 60 cm. The PIV method was used to calculate a vector field from pairs of images that were recorded from the top of the experiments at a 2 mm displacement increment. The slip azimuth on discrete faults was calculated and visualized by means of a directional derivative of this vector field. From this data set, several stages in the evolution of the models could be identified. The stages were defined by changes in the degree of displacement partitioning, i.e. slip along-strike and orthogonal to the plate boundary. A first stage of distributed strain (with no visible faults at the model surface) was followed by a shear lens stage with oblique displacement on pro- and retro-shear. The oblique displacement became locally partitioned during progressive displacement. During the final stage, strain was more fully partitioned between a newly formed central strike slip zone and reverse faults at the sides. Strain partitioning was best developed in the 15 degrees model, which shows near-reverse faults along both sides of the wedge in addition to strike slip displacement in the center. In further analysis we extracted average slip vectors for
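
    The post-processing step described above can be sketched generically (this is a reconstruction, not the laboratory's actual code): given an incremental displacement field (u, v) from PIV, a directional derivative along a chosen direction highlights discrete fault traces, and the slip azimuth follows from the vector components.

        import numpy as np

        ny, nx = 200, 400
        u = np.random.rand(ny, nx)   # stand-in for measured x-displacements
        v = np.random.rand(ny, nx)   # stand-in for measured y-displacements

        theta = np.deg2rad(15.0)                      # analysis direction
        d = np.array([np.cos(theta), np.sin(theta)])  # unit direction vector

        du_dy, du_dx = np.gradient(u)                 # unit grid spacing assumed
        dv_dy, dv_dx = np.gradient(v)
        ddir_u = du_dx * d[0] + du_dy * d[1]          # directional derivative of u
        ddir_v = dv_dx * d[0] + dv_dy * d[1]          # directional derivative of v

        azimuth = np.degrees(np.arctan2(v, u))        # slip azimuth per grid point
        fault_signal = np.hypot(ddir_u, ddir_v)       # large values flag fault traces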

  16. Quantitative modeling of ICRF antennas with integrated time domain RF sheath and plasma physics

    SciTech Connect

    Smithe, David N.; D'Ippolito, Daniel A.; Myra, James R.

    2014-02-12

    Significant efforts have been made to quantitatively benchmark the sheath sub-grid model used in our time-domain simulations of plasma-immersed antenna near fields, which includes highly detailed three-dimensional geometry, the presence of the slow wave, and the non-linear evolution of the sheath potential. We present both our quantitative benchmarking strategy, and results for the ITER antenna configuration, including detailed maps of electric field, and sheath potential along the entire antenna structure. Our method is based upon a time-domain linear plasma model, using the finite-difference electromagnetic Vorpal/Vsim software. This model has been augmented with a non-linear rf-sheath sub-grid model, which provides a self-consistent boundary condition for plasma current where it exists in proximity to metallic surfaces. Very early, this algorithm was designed and demonstrated to work on very complicated three-dimensional geometry, derived from CAD or other complex description of actual hardware, including ITER antennas. Initial work with the simulation model has also provided a confirmation of the existence of propagating slow waves in the low density edge region, which can significantly impact the strength of the rf-sheath potential, which is thought to contribute to impurity generation. Our sheath algorithm is based upon per-point lumped-circuit parameters for which we have estimates and general understanding, but which allow for some tuning and fitting. We are now engaged in a careful benchmarking of the algorithm against known analytic models and existing computational techniques to ensure that the predictions of rf-sheath voltage are quantitatively consistent and believable, especially where slow waves share in the field with the fast wave. Currently in progress, an addition to the plasma force response accounting for the sheath potential should enable the modeling of sheath plasma waves, a predicted additional root of the dispersion relation, existing at the

  17. Quantitative model of the growth of floodplains by vertical accretion

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2000-01-01

    A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in both the number of floods and the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood which best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of the net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to the estimation of floodplain growth for other floodplains throughout the world which do not have detailed data of sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
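
    The model logic lends itself to a short simulation sketch (illustrative numbers only, under the stated assumptions): annual peak stages are drawn from a heavy-tailed distribution, and each flood that exceeds the threshold set by the current floodplain elevation deposits a constant increment, which in turn raises the threshold for later floods.

        import random

        random.seed(1)
        dz = 0.02          # net sediment deposition per flood (m), hypothetical
        elevation = 0.0    # floodplain elevation above the initial threshold (m)

        for _ in range(200):
            peak_stage = random.expovariate(1.0)  # annual peak stage (m), illustrative
            if peak_stage > elevation:            # flood overtops the floodplain
                elevation += dz                   # constant increment per flood
        print(f"elevation after 200 yr: {elevation:.2f} m")

    The decelerating growth this produces mirrors the monotonic floodplain growth curves the model is fitted against.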

  18. Model-based quantitative laser Doppler flowmetry in skin

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2010-09-01

    Laser Doppler flowmetry (LDF) can be used for assessing the microcirculatory perfusion. However, conventional LDF (cLDF) gives only a relative perfusion estimate for an unknown measurement volume, with no information about the blood flow speed distribution. To overcome these limitations, a model-based analysis method for quantitative LDF (qLDF) is proposed. The method uses inverse Monte Carlo technique with an adaptive three-layer skin model. By analyzing the optimal model where measured and simulated LDF spectra detected at two different source-detector separations match, the absolute microcirculatory perfusion for a specified speed region in a predefined volume is determined. qLDF displayed errors <12% when evaluated using simulations of physiologically relevant variations in the layer structure, in the optical properties of static tissue, and in blood absorption. Inhomogeneous models containing small blood vessels, hair, and sweat glands displayed errors <5%. Evaluation models containing single larger blood vessels displayed significant errors but could be dismissed by residual analysis. In vivo measurements using local heat provocation displayed a higher perfusion increase with qLDF than cLDF, due to nonlinear effects in the latter. The qLDF showed that the perfusion increase occurred due to an increased amount of red blood cells with a speed >1 mm/s.

  19. Discrete modeling of hydraulic fracturing processes in a complex pre-existing fracture network

    NASA Astrophysics Data System (ADS)

    Kim, K.; Rutqvist, J.; Nakagawa, S.; Houseworth, J. E.; Birkholzer, J. T.

    2015-12-01

    Hydraulic fracturing and stimulation of fracture networks are widely used by the energy industry (e.g., shale gas extraction, enhanced geothermal systems) to increase permeability of geological formations. Numerous analytical and numerical models have been developed to help understand and predict the behavior of hydraulically induced fractures. However, many existing models assume simple fracturing scenarios with highly idealized fracture geometries (e.g., propagation of a single fracture with assumed shapes in a homogeneous medium). Modeling hydraulic fracture propagation in the presence of natural fractures and heterogeneities can be very challenging because of the complex interactions between fluid, rock matrix, and rock interfaces, as well as the interactions between propagating fractures and pre-existing natural fractures. In this study, the TOUGH-RBSN code for coupled hydro-mechanical modeling is utilized to simulate hydraulic fracture propagation and its interaction with pre-existing fracture networks. The simulation tool combines TOUGH2, a simulator of subsurface multiphase flow and mass transport based on the finite volume approach, with the implementation of a lattice modeling approach for geomechanical and fracture-damage behavior, named Rigid-Body-Spring Network (RBSN). The discrete fracture network (DFN) approach is facilitated by the Voronoi discretization via a fully automated modeling procedure. The numerical program is verified through a simple simulation for single fracture propagation, in which the resulting fracture geometry is compared to an analytical solution for given fracture length and aperture. Subsequently, predictive simulations are conducted for planned laboratory experiments using rock-analogue (soda-lime glass) samples containing a designed, pre-existing fracture network. The results of a preliminary simulation demonstrate selective fracturing and fluid infiltration along the pre-existing fractures, with additional fracturing in part

  20. A Pleiotropic Nonadditive Model of Variation in Quantitative Traits

    PubMed Central

    Caballero, A.; Keightley, P. D.

    1994-01-01

    A model of mutation-selection-drift balance incorporating pleiotropic and dominance effects of new mutations on quantitative traits and fitness is investigated and used to predict the amount and nature of genetic variation maintained in segregating populations. The model is based on recent information on the joint distribution of mutant effects on bristle traits and fitness in Drosophila melanogaster from experiments on the accumulation of spontaneous and P element-induced mutations. These experiments suggest a leptokurtic distribution of effects with an intermediate correlation between effects on the trait and fitness. Mutants of large effect tend to be partially recessive while those with smaller effect are on average additive, but apparently with very variable gene action. The model is parameterized with two different sets of information derived from P element insertion and spontaneous mutation data, though the latter are not fully known. They differ in the number of mutations per generation which is assumed to affect the trait. Predictions of the variance maintained for bristle number assuming parameters derived from effects of P element insertions, in which the proportion of mutations with an effect on the trait is small, fit reasonably well with experimental observations. The equilibrium genetic variance is nearly independent of the degree of dominance of new mutations. Heritabilities of between 0.4 and 0.6 are predicted with population sizes from 10^4 to 10^6, and most of the variance for the metric trait in segregating populations is due to a small proportion of mutations (about 1% of the total number) with neutral or nearly neutral effects on fitness and intermediate effects on the trait (0.1-0.5 σ_P). Much of the genetic variance is contributed by recessive or partially recessive mutants, but only a small proportion (about 10%) of the genetic variance is dominance variance. The amount of apparent selection on the trait itself generated by the model is

  1. Towards Quantitative Spatial Models of Seabed Sediment Composition

    PubMed Central

    Stephens, David; Diesing, Markus

    2015-01-01

    There is a need for fit-for-purpose maps for accurately depicting the types of seabed substrate and habitat and the properties of the seabed for the benefits of research, resource management, conservation and spatial planning. The aim of this study is to determine whether it is possible to predict substrate composition across a large area of seabed using legacy grain-size data and environmental predictors. The study area includes the North Sea up to approximately 58.44°N and the United Kingdom’s parts of the English Channel and the Celtic Seas. The analysis combines outputs from hydrodynamic models as well as optical remote sensing data from satellite platforms and bathymetric variables, which are mainly derived from acoustic remote sensing. We build a statistical regression model to make quantitative predictions of sediment composition (fractions of mud, sand and gravel) using the random forest algorithm. The compositional data is analysed on the additive log-ratio scale. An independent test set indicates that approximately 66% and 71% of the variability of the two log-ratio variables are explained by the predictive models. A EUNIS substrate model, derived from the predicted sediment composition, achieved an overall accuracy of 83% and a kappa coefficient of 0.60. We demonstrate that it is feasible to spatially predict the seabed sediment composition across a large area of continental shelf in a repeatable and validated way. We also highlight the potential for further improvements to the method. PMID:26600040
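
    A compact sketch of the statistical pipeline described above, with synthetic data standing in for the legacy grain-size samples: the two additive log-ratios of the (mud, sand, gravel) composition are modeled with random forests, and predictions are back-transformed to fractions that sum to one.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(7)
        X = rng.normal(size=(500, 6))                    # environmental predictors
        comp = rng.dirichlet((2.0, 5.0, 1.0), size=500)  # mud, sand, gravel fractions

        alr1 = np.log(comp[:, 0] / comp[:, 2])           # log(mud/gravel)
        alr2 = np.log(comp[:, 1] / comp[:, 2])           # log(sand/gravel)

        rf1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, alr1)
        rf2 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, alr2)

        p1, p2 = rf1.predict(X[:5]), rf2.predict(X[:5])
        denom = 1.0 + np.exp(p1) + np.exp(p2)            # inverse ALR transform
        mud, sand, gravel = np.exp(p1) / denom, np.exp(p2) / denom, 1.0 / denom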

  3. A Comprehensive, Quantitative, and Genome-Wide Model of Translation

    PubMed Central

    Siwiak, Marlena; Zielenkiewicz, Piotr

    2010-01-01

    Translation is still poorly characterised at the level of individual proteins and its role in regulation of gene expression has been constantly underestimated. To better understand the process of protein synthesis we developed a comprehensive and quantitative model of translation, characterising protein synthesis separately for individual genes. The main advantage of the model is that, based on only a few datasets and general assumptions, it allows the calculation of many important translational parameters, which are extremely difficult to measure experimentally. In the model, each gene is attributed with a set of translational parameters, namely the absolute number of transcripts, ribosome density, mean codon translation time, total transcript translation time, total time required for translation initiation and elongation, translation initiation rate, mean mRNA lifetime, and absolute number of proteins produced by gene transcripts. Most parameters were calculated based on only one experimental dataset of genome-wide ribosome profiling. The model was implemented in Saccharomyces cerevisiae, and its results were compared with available data, yielding reasonably good correlations. The calculated coefficients were used to perform a global analysis of translation in yeast, revealing some interesting aspects of the process. We have shown that two commonly used measures of translation efficiency – ribosome density and number of protein molecules produced – are affected by two distinct factors. High values of both measures are caused, among other things, by very short times of translation initiation; however, the origins of initiation time reduction are completely different in both cases. The model is universal and can be applied to any organism, if the necessary input data are available. The model allows us to better integrate transcriptomic and proteomic data. A few other possibilities of the model utilisation are discussed using the example of the yeast system. PMID:20686685

  4. Modeling of X-Ray Fluorescence for Quantitative Analysis

    NASA Astrophysics Data System (ADS)

    Zarkadas, Charalambos

    2010-03-01

    Quantitative XRF algorithms involve mathematical procedures intended to solve a set of equations expressing the total fluorescence intensity of selected X-ray element lines emitted after sample irradiation by a photon source. These equations [1] have been derived under the assumptions of a parallel exciting beam and that of a perfectly flat and uniform sample, and have since been extended to describe composite cases such as multilayered samples and samples exhibiting particle size effects. In state-of-the-art algorithms the equations include most of the physical processes which can contribute to the measured fluorescence signal and make use of evaluated databases for the Fundamental Parameters included in the calculations. The accuracy of the results obtained depends to a great extent on the completeness of the model used to describe X-ray fluorescence intensities and on the compliance of the actual experimental conditions with the basic assumptions under which the mathematical formulas were derived.
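
    For concreteness, the intensity equations referred to above are commonly written, for the primary fluorescence of element i excited by a parallel beam in a thick, flat, homogeneous sample, in the Sherman form below; this is a standard textbook rendering under those assumptions, not necessarily the exact expression of reference [1]:

        I_i \;=\; G\, c_i \int_{E_{\mathrm{edge},i}}^{E_{\max}}
        \frac{\tau_i(E)\,\omega_i\, p_i\, I_0(E)}
             {\mu_s(E)/\sin\psi_1 \;+\; \mu_s(E_i)/\sin\psi_2}\,\mathrm{d}E,

    where c_i is the mass fraction, τ_i(E) the photoionization cross section, ω_i the fluorescence yield, p_i the line transition probability, μ_s the sample mass attenuation coefficients at the excitation and fluorescence energies, ψ_1 and ψ_2 the incidence and take-off angles, and G a factor collecting geometry and detector terms.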

  5. Existence of Global Weak Solution for Compressible Fluid Models of Korteweg Type

    NASA Astrophysics Data System (ADS)

    Haspot, Boris

    2011-06-01

    This work is devoted to proving existence of global weak solutions for a general isothermal model of capillary fluids derived by Dunn and Serrin (Arch Rational Mech Anal 88(2):95-133, 1985) which can be used as a phase transition model. We improve the results of Danchin and Desjardins (Annales de l'IHP, Analyse non linéaire 18:97-133, 2001) by showing the existence of global weak solution in dimension two for initial data in the energy space, close to a stable equilibrium and with specific choices on the capillary coefficients. In particular we are interested in capillary coefficients approximating a constant capillarity coefficient κ. To finish we show the existence of global weak solution in dimension one for a specific type of capillary coefficients with large initial data in the energy space.
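
    For orientation, one common isothermal formulation of the Korteweg-type system in question reads as follows (hedged: conventions for the capillary stress tensor vary slightly across the cited works):

        \partial_t \rho + \operatorname{div}(\rho u) = 0,
        \partial_t(\rho u) + \operatorname{div}(\rho u \otimes u)
            - \operatorname{div}\big(2\mu(\rho) D(u)\big) + \nabla P(\rho) = \operatorname{div} K,
        K = \Big(\rho\,\kappa(\rho)\,\Delta\rho
            + \tfrac{1}{2}\big(\kappa(\rho) + \rho\,\kappa'(\rho)\big)|\nabla\rho|^2\Big)\,\mathrm{Id}
            - \kappa(\rho)\,\nabla\rho \otimes \nabla\rho,

    with D(u) = (∇u + ∇uᵀ)/2 the strain rate tensor and κ(ρ) the capillary coefficient; a constant κ recovers the classical constant-capillarity case discussed above.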

  6. Existence of standard models of conic fibrations over non-algebraically-closed fields

    SciTech Connect

    Avilov, A A

    2014-12-31

    We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  7. Existence of global weak solution for a reduced gravity two and a half layer model

    SciTech Connect

    Guo, Zhenhua Li, Zilai Yao, Lei

    2013-12-15

    We investigate the existence of global weak solution to a reduced gravity two and a half layer model in one-dimensional bounded spatial domain or periodic domain. Also, we show that any possible vacuum state has to vanish within finite time, after which the weak solution becomes a unique strong one.

  8. Leveraging an existing data warehouse to annotate workflow models for operations research and optimization.

    PubMed

    Borlawsky, Tara; LaFountain, Jeanne; Petty, Lynda; Saltz, Joel H; Payne, Philip R O

    2008-11-06

    Workflow analysis is frequently performed in the context of operations research and process optimization. In order to develop a data-driven workflow model that can be employed to assess opportunities to improve the efficiency of perioperative care teams at The Ohio State University Medical Center (OSUMC), we have developed a method for integrating standard workflow modeling formalisms, such as UML activity diagrams with data-centric annotations derived from our existing data warehouse.

  9. Monitoring with Trackers Based on Semi-Quantitative Models

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1997-01-01

    In three years of NASA-sponsored research preceding this project, we successfully developed a technology for: (1) building qualitative and semi-quantitative models from libraries of model-fragments, (2) simulating these models to predict future behaviors with the guarantee that all possible behaviors are covered, (3) assimilating observations into behaviors, shrinking uncertainty so that incorrect models are eventually refuted and correct models make stronger predictions for the future. In our object-oriented framework, a tracker is an object which embodies the hypothesis that the available observation stream is consistent with a particular behavior of a particular model. The tracker maintains its own status (consistent, superseded, or refuted), and answers questions about its explanation for past observations and its predictions for the future. In the MIMIC approach to monitoring of continuous systems, a number of trackers are active in parallel, representing alternate hypotheses about the behavior of a system. This approach is motivated by the need to avoid 'system accidents' [Perrow, 1985] due to operator fixation on a single hypothesis, as for example at Three Mile Island. As we began to address these issues, we focused on three major research directions that we planned to pursue over a three-year project: (1) tractable qualitative simulation, (2) semiquantitative inference, and (3) tracking set management. Unfortunately, funding limitations made it impossible to continue past year one. Nonetheless, we made major progress in the first two of these areas. Progress in the third area was slower because the graduate student working on that aspect of the project decided to leave school and take a job in industry. I enclose a set of abstracts of selected papers on the work described below. Several papers that draw on the research supported during this period appeared in print after the grant period ended.
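
    The tracker abstraction described above can be sketched as a small class (names are illustrative, not the project's actual API): each tracker pairs a model with one behavior hypothesis and downgrades its own status when observations fall outside the behavior's predicted ranges.

        class Tracker:
            CONSISTENT, SUPERSEDED, REFUTED = "consistent", "superseded", "refuted"

            def __init__(self, model, behavior):
                self.model = model        # semi-quantitative model fragment
                self.behavior = behavior  # callable: time -> (low, high) range
                self.status = Tracker.CONSISTENT
                self.history = []

            def assimilate(self, t, observation):
                """Check an observation against the predicted range at time t."""
                low, high = self.behavior(t)
                self.history.append((t, observation))
                if not (low <= observation <= high):
                    self.status = Tracker.REFUTED
                return self.status

        # Two rival hypotheses tracked in parallel, MIMIC-style.
        trackers = [Tracker("nominal", lambda t: (0.9, 1.1)),
                    Tracker("leak", lambda t: (0.5, 0.8))]
        for tracker in trackers:
            tracker.assimilate(0.0, 1.0)
        print([tracker.status for tracker in trackers])  # consistent, refuted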

  10. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three models constructed. The study is of both cognitive and applicative importance. It presents the unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools of coal quality assessment. PMID:27438794
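
    The modeling step lends itself to a brief sketch (synthetic data stands in for the 132 coal samples; this is a plain PLS, not the paper's robust variant): a Partial Least Squares regression maps the 24 coal and ash parameters to a trace-element concentration, with cross-validation supplying the prediction error.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(132, 24))  # physical/chemical and ash parameters
        y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=132)

        pls = PLSRegression(n_components=3)
        rmsecv = -cross_val_score(pls, X, y, cv=5,
                                  scoring="neg_root_mean_squared_error").mean()
        print(f"root mean square error of cross-validation: {rmsecv:.3f}")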

  13. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    NASA Technical Reports Server (NTRS)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried

  14. Understanding carbon catabolite repression in Escherichia coli using quantitative models.

    PubMed

    Kremling, A; Geiselmann, J; Ropers, D; de Jong, H

    2015-02-01

    Carbon catabolite repression (CCR) controls the order in which different carbon sources are metabolized. Although this system is one of the paradigms of the regulation of gene expression in bacteria, the underlying mechanisms remain controversial. CCR involves the coordination of different subsystems of the cell that are responsible for the uptake of carbon sources, their breakdown for the production of energy and precursors, and the conversion of the latter to biomass. The complexity of this integrated system, with regulatory mechanisms cutting across metabolism, gene expression, and signaling, and that are subject to global physical and physiological constraints, has motivated important modeling efforts over the past four decades, especially in the enterobacterium Escherichia coli. Different hypotheses concerning the dynamic functioning of the system have been explored by a variety of modeling approaches. We review these studies and summarize their contributions to the quantitative understanding of CCR, focusing on diauxic growth in E. coli. Moreover, we propose a highly simplified representation of diauxic growth that makes it possible to bring out the salient features of the models proposed in the literature and confront and compare the explanations they provide.
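
    A highly simplified representation of diauxic growth, in the spirit of the one discussed in the review (the form and parameters below are hypothetical): glucose uptake follows Monod kinetics, while uptake of the second carbon source is inhibited until glucose is nearly exhausted.

        from scipy.integrate import solve_ivp

        def rhs(t, y, mu1=1.0, mu2=0.5, K=0.1, Ki=0.01, Y=0.5):
            glc, lac, biomass = y
            r1 = mu1 * glc / (K + glc)                    # glucose uptake (Monod)
            r2 = mu2 * lac / (K + lac) * Ki / (Ki + glc)  # repressed by glucose
            return [-r1 * biomass / Y, -r2 * biomass / Y, (r1 + r2) * biomass]

        sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.0, 0.01])
        print(f"final biomass: {sol.y[2, -1]:.3f}")

    Plotting biomass against time for such a system reproduces the characteristic growth plateau between the two exponential phases.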

  15. Quantitative determination of guggulsterone in existing natural populations of Commiphora wightii (Arn.) Bhandari for identification of germplasm having higher guggulsterone content.

    PubMed

    Kulhari, Alpana; Sheorayan, Arun; Chaudhury, Ashok; Sarkar, Susheel; Kalia, Rajwant K

    2015-01-01

    Guggulsterone is an aromatic steroidal ketonic compound obtained from the vertical resin ducts and canals of the bark of Commiphora wightii (Arn.) Bhandari (Family - Burseraceae). Owing to its multifarious medicinal and therapeutic values as well as its various other significant bioactivities, guggulsterone has high demand in the pharmaceutical, perfumery and incense industries. More and more pharmaceutical and perfumery industries are showing interest in guggulsterone; therefore, there is a need for its quantitative determination in existing natural populations of C. wightii. Elite germplasm identified as having higher guggulsterone content can then be multiplied through conventional or biotechnological means. In the present study an effort was made to estimate the two isoforms of guggulsterone, i.e. E and Z guggulsterone, in raw exudates of 75 accessions of C. wightii collected from three states of North-western India viz. Rajasthan (19 districts), Haryana (4 districts) and Gujarat (3 districts). The extracted steroid-rich fraction from stem samples was fractionated using reverse-phase preparative High Performance Liquid Chromatography (HPLC) coupled with a UV/VIS detector operating at a wavelength of 250 nm. HPLC analysis of stem samples of wild as well as cultivated plants showed that the concentration of E and Z isomers as well as total guggulsterone was highest in Rajasthan, as compared to the Haryana and Gujarat states. The highest concentration of E guggulsterone (487.45 μg/g) and Z guggulsterone (487.68 μg/g) was found in samples collected from Devikot (Jaisalmer) and Palana (Bikaner) respectively, the two hyper-arid regions of Rajasthan, India. The quantitative assay was presented on the basis of a calibration curve obtained from a mixture of standard E and Z guggulsterones with different validatory parameters including linearity, selectivity and specificity, accuracy, auto-injector, flow-rate, recoveries, limit of detection and limit of quantification (as per norms of International
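
    The quantification step can be sketched as an ordinary calibration-curve computation (the numbers below are illustrative, not the paper's data): peak areas of guggulsterone standards are fitted linearly against concentration, and an unknown sample is quantified by inverting the fit.

        import numpy as np

        conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])       # standards (ug/mL)
        area = np.array([152.0, 371.0, 760.0, 1510.0, 3005.0])  # peak areas (a.u.)

        slope, intercept = np.polyfit(conc, area, 1)    # linear calibration fit
        r_squared = np.corrcoef(conc, area)[0, 1] ** 2  # linearity check

        unknown_area = 980.0
        unknown_conc = (unknown_area - intercept) / slope
        print(f"R^2 = {r_squared:.4f}, estimated concentration = {unknown_conc:.1f} ug/mL")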

  17. Quantitative Modeling of the Alternative Pathway of the Complement System.

    PubMed

    Zewde, Nehemiah; Gorham, Ronald D; Dorado, Angel; Morikis, Dimitrios

    2016-01-01

    The complement system is an integral part of innate immunity that detects and eliminates invading pathogens through a cascade of reactions. The destructive effects of the complement activation on host cells are inhibited through versatile regulators that are present in plasma and bound to membranes. Impairment in the capacity of these regulators to function in the proper manner results in autoimmune diseases. To better understand the delicate balance between complement activation and regulation, we have developed a comprehensive quantitative model of the alternative pathway. Our model incorporates a system of ordinary differential equations that describes the dynamics of the four steps of the alternative pathway under physiological conditions: (i) initiation (fluid phase), (ii) amplification (surfaces), (iii) termination (pathogen), and (iv) regulation (host cell and fluid phase). We have examined complement activation and regulation on different surfaces, using the cellular dimensions of a characteristic bacterium (E. coli) and host cell (human erythrocyte). In addition, we have incorporated neutrophil-secreted properdin into the model, highlighting the cross talk of neutrophils with the alternative pathway in coordinating innate immunity. Our study yields a series of time-dependent response data for all alternative pathway proteins, fragments, and complexes. We demonstrate the robustness of the alternative pathway on the surface of pathogens, on which complement components were able to saturate the entire region in about 54 minutes, while occupying less than one percent of the surface of host cells over the same time period. Our model reveals that tight regulation of complement starts in the fluid phase, in which propagation of the alternative pathway was inhibited through the dismantlement of fluid phase convertases. Our model also depicts the intricate role that properdin released from neutrophils plays in initiating and propagating the alternative pathway during bacterial infection.
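
    As a toy illustration of the kind of ordinary-differential-equation system involved (a deliberately tiny stand-in, not the authors' full model), the sketch below couples C3 tick-over, convertase-driven amplification, and a first-order regulation term; all rate constants are hypothetical.

        from scipy.integrate import solve_ivp

        def alternative_pathway(t, y, k_tick=1e-4, k_amp=5.0, k_conv=0.1, k_reg=0.5):
            c3, c3b, convertase = y
            cleavage = k_tick * c3 + k_amp * convertase * c3  # tick-over + amplification
            return [-cleavage,                                # C3 consumed
                    cleavage - k_conv * c3b,                  # C3b deposited
                    k_conv * c3b - k_reg * convertase]        # convertase formed/regulated

        sol = solve_ivp(alternative_pathway, (0.0, 60.0), [1.0, 0.0, 0.0])
        print(f"C3b after 60 min: {sol.y[1, -1]:.3f}")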

  18. Mixed quantitative/qualitative modeling and simulation of the cardiovascular system.

    PubMed

    Nebot, A; Cellier, F E; Vallverdú, M

    1998-02-01

    The cardiovascular system is composed of the hemodynamical system and the central nervous system (CNS) control. Whereas the structure and functioning of the hemodynamical system are well known and a number of quantitative models have already been developed that capture the behavior of the hemodynamical system fairly accurately, the CNS control is, at present, still not completely understood and no good deductive models exist that are able to describe the CNS control from physical and physiological principles. The use of qualitative methodologies may offer an interesting alternative to quantitative modeling approaches for inductively capturing the behavior of the CNS control. In this paper, a qualitative model of the CNS control of the cardiovascular system is developed by means of the fuzzy inductive reasoning (FIR) methodology. FIR is a fairly new modeling technique that is based on the general system problem solving (GSPS) methodology developed by G.J. Klir (Architecture of Systems Problem Solving, Plenum Press, New York, 1985). Previous investigations have demonstrated the applicability of this approach to modeling and simulating systems, the structure of which is partially or totally unknown. In this paper, five separate controller models for different control actuations are described that have been identified independently using the FIR methodology. Then the loop between the hemodynamical system, modeled by means of differential equations, and the CNS control, modeled in terms of five FIR models, is closed, in order to study the behavior of the cardiovascular system as a whole. The model described in this paper has been validated for a single patient only. PMID:9568385

  19. A quantitative model for assessing community dynamics of pleistocene mammals.

    PubMed

    Lyons, S Kathleen

    2005-06-01

    Previous studies have suggested that species responded individualistically to the climate change of the last glaciation, expanding and contracting their ranges independently. Consequently, many researchers have concluded that community composition is plastic over time. Here I quantitatively assess changes in community composition over broad timescales and assess the effect of range shifts on community composition. Data on Pleistocene mammal assemblages from the FAUNMAP database were divided into four time periods (preglacial, full glacial, postglacial, and modern). Simulation analyses were designed to determine whether the degree of change in community composition is consistent with independent range shifts, given the distribution of range shifts observed. Results indicate that many of the communities examined in the United States were more similar through time than expected if individual range shifts were completely independent. However, in each time transition examined, there were areas of nonanalogue communities. I conducted sensitivity analyses to explore how the results were affected by the assumptions of the null model. Conclusions about changes in mammalian distributions and community composition are robust with respect to the assumptions of the model. Thus, whether because of biotic interactions or because of common environmental requirements, community structure through time is more complex than previously thought.
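
    The null-model logic can be sketched as a small randomization test (hedged: a generic reconstruction, not the paper's FAUNMAP analysis): each species' range is shifted independently by a draw from the observed shift distribution, and community similarity between time periods is compared against that null.

        import numpy as np

        rng = np.random.default_rng(3)
        n_species, n_sites, width = 50, 30, 150.0
        sites = np.linspace(0.0, 1000.0, n_sites)      # site positions (km)
        centers = rng.uniform(0.0, 1000.0, n_species)  # range centers, period 1

        def community(centers):
            # Presence/absence: a species occupies sites within its range.
            return np.abs(sites[None, :] - centers[:, None]) < width

        def mean_jaccard(a, b):
            inter = np.logical_and(a, b).sum(axis=0)
            union = np.logical_or(a, b).sum(axis=0)
            return np.mean(inter / np.maximum(union, 1))

        shifts = rng.normal(100.0, 80.0, n_species)    # stand-in observed shifts
        period1 = community(centers)
        null = [mean_jaccard(period1, community(centers + rng.permutation(shifts)))
                for _ in range(200)]
        print(f"null mean Jaccard similarity: {np.mean(null):.3f}")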

  20. Quantitative comparisons of analogue models of brittle wedge dynamics

    NASA Astrophysics Data System (ADS)

    Schreurs, Guido

    2010-05-01

    Analogue model experiments are widely used to gain insights into the evolution of geological structures. In this study, we present a direct comparison of experimental results of 14 analogue modelling laboratories using prescribed set-ups. A quantitative analysis of the results will document the variability among models and will allow an appraisal of reproducibility and limits of interpretation. This has direct implications for comparisons between structures in analogue models and natural field examples. All laboratories used the same frictional analogue materials (quartz and corundum sand) and prescribed model-building techniques (sieving and levelling). Although each laboratory used its own experimental apparatus, the same type of self-adhesive foil was used to cover the base and all the walls of the experimental apparatus in order to guarantee identical boundary conditions (i.e. identical shear stresses at the base and walls). Three experimental set-ups using only brittle frictional materials were examined. In each of the three set-ups the model was shortened by a vertical wall, which moved with respect to the fixed base and the three remaining sidewalls. The minimum width of the model (dimension parallel to mobile wall) was also prescribed. In the first experimental set-up, a quartz sand wedge with a surface slope of ˜20° was pushed by a mobile wall. All models conformed to the critical taper theory, maintained a stable surface slope and did not show internal deformation. In the next two experimental set-ups, a horizontal sand pack consisting of alternating quartz sand and corundum sand layers was shortened from one side by the mobile wall. In one of the set-ups a thin rigid sheet covered part of the model base and was attached to the mobile wall (i.e. a basal velocity discontinuity distant from the mobile wall). In the other set-up a basal rigid sheet was absent and the basal velocity discontinuity was located at the mobile wall. In both types of experiments

  1. Quantitative phase-field modeling for boiling phenomena.

    PubMed

    Badillo, Arnoldo

    2012-10-01

    A phase-field model is developed for quantitative simulation of bubble growth in the diffusion-controlled regime. The model accounts for phase change and surface tension effects at the liquid-vapor interface of pure substances with large property contrast. The derivation of the model follows a two-fluid approach, where the diffuse interface is assumed to have an internal microstructure, defined by a sharp interface. Despite the fact that phases within the diffuse interface are considered to have their own velocities and pressures, an averaging procedure at the atomic scale allows for expressing all the constitutive equations in terms of mixture quantities. From the averaging procedure and asymptotic analysis of the model, nonconventional terms appear in the energy and phase-field equations to compensate for the variation of the properties across the diffuse interface. Without these new terms, no convergence towards the sharp-interface model can be attained. The asymptotic analysis also revealed a very small thermal capillary length for real fluids, such as water, that makes it impossible for conventional phase-field models to capture bubble growth in the millimeter range size. For instance, important phenomena such as bubble growth and detachment from a hot surface could not be simulated due to the large number of grid points required to resolve all the scales. Since the shape of the liquid-vapor interface is primarily controlled by the effects of an isotropic surface energy (surface tension), a solution involving the elimination of the curvature from the phase-field equation is devised. The elimination of the curvature from the phase-field equation changes the length scale dominating the phase change from the thermal capillary length to the thickness of the thermal boundary layer, which is several orders of magnitude larger. A detailed analysis of the phase-field equation revealed that a split of this equation into two independent parts is possible for system sizes

  2. Quantitative property-structural relation modeling on polymeric dielectric materials

    NASA Astrophysics Data System (ADS)

    Wu, Ke

    Nowadays, polymeric materials have attracted more and more attention in dielectric applications. But searching for a material with desired properties is still largely based on trial and error. To facilitate the development of new polymeric materials, heuristic models built using the Quantitative Structure Property Relationships (QSPR) techniques can provide reliable "working solutions". In this thesis, the application of QSPR on polymeric materials is studied from two angles: descriptors and algorithms. A novel set of descriptors, called infinite chain descriptors (ICD), are developed to encode the chemical features of pure polymers. ICD is designed to eliminate the uncertainty of polymer conformations and inconsistency of molecular representation of polymers. Models for the dielectric constant, band gap, dielectric loss tangent and glass transition temperatures of organic polymers are built with high prediction accuracy. Two new algorithms, the physics-enlightened learning method (PELM) and multi-mechanism detection, are designed to deal with two typical challenges in material QSPR. PELM is a meta-algorithm that utilizes the classic physical theory as guidance to construct the candidate learning function. It shows better out-of-domain prediction accuracy compared to the classic machine learning algorithm (support vector machine). Multi-mechanism detection is built based on a cluster-weighted mixing model similar to a Gaussian mixture model. The idea is to separate the data into subsets where each subset can be modeled by a much simpler model. The case study on glass transition temperature shows that this method can provide better overall prediction accuracy even though less data is available for each subset model. In addition, the techniques developed in this work are also applied to polymer nanocomposites (PNC). PNC are new materials with outstanding dielectric properties. As a key factor in determining the dispersion state of nanoparticles in the polymer matrix

  3. Molecular Imaging of Tumor Hypoxia: Existing Problems and Their Potential Model-Based Solutions.

    PubMed

    Shi, Kuangyu; Ziegler, Sibylle I; Vaupel, Peter

    2016-01-01

    Molecular imaging of tissue hypoxia generates contrast in hypoxic areas by applying hypoxia-specific tracers in organisms. In cancer tissue, the injected tracer needs to be transported over relatively long distances and accumulates slowly in hypoxic regions. Thus, the signal-to-background ratio of hypoxia imaging is very small and a non-specific accumulation may suppress the real hypoxia-specific signals. In addition, the heterogeneous tumor microenvironment makes the assessment of the tissue oxygenation status more challenging. In this study, the diffusion potential of oxygen and of a hypoxia tracer is theoretically assessed for 4 different hypoxia subtypes: ischemic acute hypoxia, hypoxemic acute hypoxia, diffusion-limited chronic hypoxia and anemic chronic hypoxia. In particular, a reaction-diffusion equation is introduced to quantitatively analyze the interstitial diffusion of the hypoxia tracer [(18)F]FMISO. Imaging analysis strategies are explored based on reaction-diffusion simulations. For hypoxia imaging of low signal-to-background ratio, pharmacokinetic modelling has advantages in extracting underlying specific binding signals from non-specific background signals and improving the assessment of tumor oxygenation. Different pharmacokinetic models are evaluated for the analysis of the hypoxia tracer [(18)F]FMISO and optimal analysis models were identified accordingly. The improvements by model-based methods for the estimation of tumor oxygenation are in agreement with experimental data. The computational modelling offers a tool to explore molecular imaging of hypoxia, and pharmacokinetic modelling is encouraged to be employed in the corresponding data analysis. PMID:27526129
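
    The reaction-diffusion reasoning can be sketched in one dimension (hypothetical parameters, not the paper's fitted values): free tracer diffuses away from a vessel and converts to a bound, hypoxia-specific pool at a rate that increases as the local oxygen tension falls.

        import numpy as np

        nx, dx = 100, 1e-3          # 1 mm domain at 10 um spacing (cm)
        D = 1.5e-6                  # tracer diffusion coefficient (cm^2/s)
        dt = 0.2 * dx**2 / (2 * D)  # stable explicit time step

        free = np.zeros(nx)
        bound = np.zeros(nx)
        po2 = np.linspace(40.0, 1.0, nx)     # oxygen falls away from the vessel
        k_bind = 1e-3 * 10.0 / (10.0 + po2)  # binding grows as pO2 drops

        for _ in range(20000):
            lap = np.zeros(nx)
            lap[1:-1] = (free[2:] - 2 * free[1:-1] + free[:-2]) / dx**2
            free += dt * (D * lap - k_bind * free)
            bound += dt * k_bind * free
            free[0] = 1.0                    # vessel keeps supplying tracer
        print(f"bound (hypoxia-specific) signal at far end: {bound[-1]:.4f}")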

  4. A study about the existence of the leverage effect in stochastic volatility models

    NASA Astrophysics Data System (ADS)

    Florescu, Ionuţ; Pãsãricã, Cristian Gabriel

    2009-02-01

    The empirical relationship between the return of an asset and the volatility of the asset has been well documented in the financial literature. Named the leverage effect or sometimes risk-premium effect, it is observed in real data that, when the return of the asset decreases, the volatility increases and vice versa. Consequently, it is important to demonstrate that any formulated model for the asset price is capable of generating this effect observed in practice. Furthermore, we need to understand the conditions on the parameters present in the model that guarantee the emergence of the leverage effect. In this paper we analyze two general specifications of stochastic volatility models and their capability of generating the perceived leverage effect. We derive conditions for the emergence of the leverage effect in both of these stochastic volatility models. We give examples using stochastic volatility models used in practice and we explicitly state the conditions for the existence of the leverage effect in these examples.
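
    The effect the paper formalizes can be checked numerically in a familiar special case (a Heston-type model with hypothetical parameters; this is an illustration, not the paper's general specifications): with negatively correlated Brownian drivers, simulated returns and variance changes come out negatively correlated.

        import numpy as np

        rng = np.random.default_rng(42)
        n, dt = 50_000, 1e-3
        kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7   # rho < 0: leverage effect

        v, rets, dvs = 0.04, [], []
        for _ in range(n):
            z1 = rng.normal()
            z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.normal()  # correlated noise
            r = np.sqrt(max(v, 0.0) * dt) * z1                    # asset return
            dv = kappa * (theta - v) * dt + xi * np.sqrt(max(v, 0.0) * dt) * z2
            rets.append(r)
            dvs.append(dv)
            v = max(v + dv, 0.0)

        print("corr(return, variance change):", np.corrcoef(rets, dvs)[0, 1])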

  5. Preparing for CTDMPLUS modeling analysis: Necessary enhancements to an existing meteorological monitoring network

    SciTech Connect

    Catizone, P.A.; Hoffnagle, G.F.; Murray, D.R.; Coble, T.D.

    1994-12-31

    The Clean Air Act Amendments (CAAA) promulgated by Congress in November 1990 had wide and immediate effect on numerous regulatory programs including State Implementation Plans (SIPs). The East Helena, Montana area was subject to the CAAA since a SIP had been submitted but not fully approved by EPA prior to November 1990. CTDMPLUS requires input of meteorological data previously not used by regulatory models and therefore not generally monitored in existing monitoring networks. To obtain the additional data, new instruments must be installed. This paper identifies the enhancements necessary to the existing network at the ASARCO plant to provide the requisite data for subsequent application of the refined complex terrain model. Details of the equipment and data acquisition are outlined and a summary of basic costs associated with the monitoring enhancements are provided.

  6. Existence of solutions to a new model of biological pattern formation

    NASA Astrophysics Data System (ADS)

    Alber, M.; Hentschel, H. G. E.; Kazmierczak, B.; Newman, S. A.

    2005-08-01

    In this paper we study the existence of classical solutions to a new model of skeletal development in the vertebrate limb. The model incorporates a general term describing adhesion interaction between cells and fibronectin, an extracellular matrix molecule secreted by the cells, as well as two secreted, diffusible regulators of fibronectin production: the positively-acting differentiation factor ("activator") TGF-β, and a negatively-acting factor ("inhibitor"). Together, these terms constitute a pattern forming system of equations. We analyze the conditions guaranteeing that smooth solutions exist globally in time. We prove that these conditions can be significantly relaxed if we add a diffusion term to the equation describing the evolution of fibronectin.
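    For orientation, the general activator-inhibitor-matrix structure described above can be written schematically as follows (a generic sketch, not the authors' exact system; a is the activator TGF-β, h the inhibitor, ρ the fibronectin density, and f, g, k generic kinetics):

```latex
% Schematic reaction-diffusion skeleton; f, g, k are generic kinetics.
\begin{aligned}
\partial_t a    &= D_a \,\Delta a + f(a, h, \rho),\\
\partial_t h    &= D_h \,\Delta h + g(a, h),\\
\partial_t \rho &= \varepsilon \,\Delta \rho + k(a, \rho),
\qquad \varepsilon \ge 0.
\end{aligned}
```

    The relaxed global-existence conditions proved in the paper correspond to switching on the regularizing diffusion term εΔρ (ε > 0) in the fibronectin equation, which is absent (ε = 0) in the original model.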

  7. Quantitative phase-field modeling for wetting phenomena.

    PubMed

    Badillo, Arnoldo

    2015-03-01

    A new phase-field model is developed for studying partial wetting. The introduction of a third phase representing a solid wall allows for the derivation of a new surface tension force that accounts for energy changes at the contact line. In contrast to other multi-phase-field formulations, the present model does not need the introduction of surface energies for the fluid-wall interactions. Instead, all wetting properties are included in a unique parameter known as the equilibrium contact angle θ_eq. The model requires the solution of a single elliptic phase-field equation, which, coupled to conservation laws for mass and linear momentum, admits the existence of steady and unsteady compact solutions (compactons). The representation of the wall by an additional phase field allows for the study of wetting phenomena on flat, rough, or patterned surfaces in a straightforward manner. The model contains only two free parameters: a measure of the interface thickness, W, and β, which enters the definition of the mixture viscosity μ = μ_l ϕ_l + μ_v ϕ_v + β μ_l ϕ_w (subscripts l, v and w denoting the liquid, vapor and wall phases). The former controls the convergence towards the sharp-interface limit and the latter the energy dissipation at the contact line. Simulations on rough surfaces show that by taking values of β higher than 1, the model can reproduce, on average, the effects of pinning events of the contact line during its dynamic motion. The model is able to capture, in good agreement with experimental observations, many physical phenomena fundamental to wetting science, such as the wetting transition on micro-structured surfaces and droplet dynamics on solid substrates. PMID:25871200

  9. An overview of existing modeling tools making use of model checking in the analysis of biochemical networks

    PubMed Central

    Carrillo, Miguel; Góngora, Pedro A.; Rosenblueth, David A.

    2012-01-01

    Model checking is a well-established technique for automatically verifying complex systems. Recently, model checkers have appeared in computer tools for the analysis of biochemical (and gene regulatory) networks. We survey several such tools to assess the potential of model checking in computational biology. Next, our overview focuses on direct applications of existing model checkers, as well as on algorithms for biochemical network analysis influenced by model checking, such as those using binary decision diagrams (BDDs) or Boolean-satisfiability solvers. We conclude with advantages and drawbacks of model checking for the analysis of biochemical networks. PMID:22833747

  11. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.

    PubMed

    Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-04-01

    To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain largely unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG), in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a univariate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that the joint interaction analysis of multiple phenotypes has much higher power to detect interaction than the interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes. PMID:27104857

  12. Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model

    SciTech Connect

    Southekal, S.; Purschke, M.L.; Schlyer, D.J.; Vaska, P.

    2011-10-01

    We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover >90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.
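    Reconstruction with a Monte Carlo-generated probability matrix is typically paired with an iterative estimator such as MLEM. Below is a generic MLEM update on a dense toy system matrix; the matrix, counts and sizes are invented stand-ins, not the RatCAP implementation.

```python
# Generic MLEM image reconstruction with a toy system matrix A (detectors x voxels).
import numpy as np

rng = np.random.default_rng(1)
n_det, n_vox = 200, 50
A = rng.random((n_det, n_vox))          # stand-in for the probability matrix
x_true = rng.random(n_vox)
y = rng.poisson(A @ x_true * 50)        # simulated coincidence counts

x = np.ones(n_vox)                      # uniform initial image
sens = A.sum(axis=0)                    # sensitivity (normalization) image
for _ in range(50):
    proj = A @ x                        # forward projection of current image
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens   # MLEM multiplicative update

print(float(np.corrcoef(x, x_true)[0, 1]))   # recovered image tracks the truth
```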

  13. Normal fault growth above pre-existing structures: insights from discrete element modelling

    NASA Astrophysics Data System (ADS)

    Wrona, Thilo; Finch, Emma; Bell, Rebecca; Jackson, Christopher; Gawthorpe, Robert; Phillips, Thomas

    2016-04-01

    In extensional systems, pre-existing structures such as shear zones may affect the growth, geometry and location of normal faults. Recent seismic reflection-based observations from the North Sea suggest that shear zones not only localise deformation in the host rock, but also in the overlying sedimentary succession. While pre-existing weaknesses are known to localise deformation in the host rock, their effect on deformation in the overlying succession is less well understood. Here, we use 3-D discrete element modelling to determine if and how kilometre-scale shear zones affect normal fault growth in the overlying succession. Discrete element models use a large number of interacting particles to describe the dynamic evolution of complex systems. The technique has therefore been applied to describe fault and fracture growth in a variety of geological settings. We model normal faulting by extending a 60×60×30 km crustal rift-basin model, which includes brittle and ductile interactions as well as gravitational and isostatic forces, by 30%. An inclined plane of weakness representing a pre-existing shear zone is introduced in the lower section of the upper brittle layer at the start of the experiment. The length, width, orientation and dip of the weak zone are systematically varied between experiments to test how these parameters control the geometric and kinematic development of overlying normal fault systems. Consistent with our seismic reflection-based observations, our results show that strain is indeed localised in and above these weak zones. In the lower brittle layer, normal faults nucleate, as expected, within the zone of weakness and control the initiation and propagation of neighbouring faults. Above this, normal faults nucleate throughout the overlying strata, where their orientations are strongly influenced by the underlying zone of weakness. These results challenge the notion that overburden normal faults simply form due to reactivation and upwards propagation of pre-existing structures.

  14. Existing General Population Models Inaccurately Predict Lung Cancer Risk in Patients Referred for Surgical Evaluation

    PubMed Central

    Isbell, James M.; Deppen, Stephen; Putnam, Joe B.; Nesbitt, Jonathan C.; Lambright, Eric S.; Dawes, Aaron; Massion, Pierre P.; Speroff, Theodore; Jones, David R.; Grogan, Eric L.

    2013-01-01

    Background: Patients undergoing resections for suspicious pulmonary lesions have a 9-55% benign rate. Validated prediction models exist to estimate the probability of malignancy in a general population, and current practice guidelines recommend their use. We evaluated these models in a surgical population to determine the accuracy of existing models to predict benign or malignant disease. Methods: We conducted a retrospective review of our thoracic surgery quality improvement database (2005-2008) to identify patients who underwent resection of a pulmonary lesion. Patients were stratified into subgroups based on age, smoking status and fluorodeoxyglucose positron emission tomography (PET) results. The probability of malignancy was calculated for each patient using the Mayo and SPN prediction models. Receiver operating characteristic (ROC) and calibration curves were used to measure model performance. Results: 89 patients met selection criteria; 73% were malignant. Patients with preoperative PET scans were divided into 4 subgroups based on age, smoking history and nodule PET avidity. Older smokers with PET-avid lesions had a 90% malignancy rate. Patients with PET-non-avid lesions, or PET-avid lesions with age <50 years, or never smokers of any age, had a 62% malignancy rate. The area under the ROC curve for the Mayo and SPN models was 0.79 and 0.80, respectively; however, the models were poorly calibrated (p<0.001). Conclusions: Despite improvements in diagnostic and imaging techniques, current general population models do not accurately predict lung cancer among patients referred for surgical evaluation. Prediction models with greater accuracy are needed to identify patients with benign disease to reduce non-therapeutic resections. PMID:21172518

  15. Pesticide exposure assessment in rice paddies in Europe: a comparative study of existing mathematical models.

    PubMed

    Karpouzas, Dimitrios G; Cervelli, Stefano; Watanabe, Hirozumi; Capri, Ettore; Ferrero, Aldo

    2006-07-01

    A comparative test was undertaken in order to identify the potential of existing mathematical models, including the rice water quality (RICEWQ) 1.6.4v model, the pesticide concentration in paddy field (PCPF-1) model and the surface water and groundwater (SWAGW) model, for calculating pesticide dissipation and exposure in rice paddies in Europe. Previous versions of the RICEWQ and PCPF-1 models had been validated under European and Japanese conditions respectively, unlike the SWAGW model, which was only recently developed as a tier-2 modelling tool. Two datasets, derived from field dissipation studies undertaken in northern Italy with the herbicides cinosulfuron and pretilachlor, were used for the modelling exercise. All models were parameterized according to the field experiments, as far as possible, considering their individual deficiencies. Models were not calibrated against field data in order to remove bias in the comparison of the results. RICEWQ 1.6.4v provided the highest agreement between measured and predicted pesticide concentrations in both paddy water and paddy soil, with modelling efficiency (EF) values ranging from 0.78 to 0.93. PCPF-1 simulated well the dissipation of herbicides in paddy water, but significantly underestimated the concentrations of pretilachlor, a chemical with high affinity for soil sorption, in paddy soil. SWAGW simulated relatively well the dissipation of both herbicides in paddy water, especially pretilachlor, but failed to predict closely the pesticide dissipation in paddy soil. Both RICEWQ and SWAGW provided low groundwater (GW) predicted environmental concentrations (PECs), suggesting a low risk of GW contamination for the two herbicides. Overall, this modelling exercise suggested that RICEWQ 1.6.4v is currently the most reliable model for higher-tier exposure assessment in rice paddies in Europe. PCPF-1 and SWAGW showed promising results, but further adjustments are required before these models can be considered as strong

  16. Testing process predictions of models of risky choice: a quantitative model comparison approach

    PubMed Central

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
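    For readers unfamiliar with the heuristic under test, here is a compact sketch of the priority heuristic's choice rule for two gambles with gains, following the published description (Brandstätter et al., 2006); the tuple encoding and the example gambles are illustrative.

```python
# Priority heuristic for two gain gambles (Brandstätter et al., 2006): compare
# minimum gains, then probabilities of the minimum gains, then maximum gains;
# stop as soon as a difference exceeds the aspiration level.
def priority_heuristic(a, b):
    """Each gamble is a list of (outcome, probability) pairs, outcomes >= 0.
    Returns 'a' or 'b'."""
    max_gain = max(x for x, _ in a + b)
    aspiration = 0.1 * max_gain                  # 1/10 of the maximum gain

    min_a, pmin_a = min(a, key=lambda op: op[0])
    min_b, pmin_b = min(b, key=lambda op: op[0])

    # Reason 1: minimum gains.
    if abs(min_a - min_b) >= aspiration:
        return 'a' if min_a > min_b else 'b'
    # Reason 2: probabilities of the minimum gains (aspiration level 0.1).
    if abs(pmin_a - pmin_b) >= 0.1:
        return 'a' if pmin_a < pmin_b else 'b'
    # Reason 3: maximum gains.
    return 'a' if max(x for x, _ in a) >= max(x for x, _ in b) else 'b'

# Example: a safer gamble vs a long shot; reason 1 already decides here.
print(priority_heuristic([(2000, 0.6), (500, 0.4)], [(5000, 0.2), (0, 0.8)]))
```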

  17. A review of existing models and methods to estimate employment effects of pollution control policies

    SciTech Connect

    Darwin, R.F.; Nesse, R.J.

    1988-02-01

    The purpose of this paper is to provide information about existing models and methods used to estimate coal mining employment impacts of pollution control policies. The EPA is currently assessing the consequences of various alternative policies to reduce air pollution. One important potential consequence of these policies is that coal mining employment may decline or shift from low-sulfur to high-sulfur coal producing regions. The EPA requires models that can estimate the magnitude and cost of these employment changes at the local level. This paper contains descriptions and evaluations of three models and methods currently used to estimate the size and cost of coal mining employment changes. The first model reviewed is the Coal and Electric Utilities Model (CEUM), a well established, general purpose model that has been used by the EPA and other groups to simulate air pollution control policies. The second model reviewed is the Advanced Utility Simulation Model (AUSM), which was developed for the EPA specifically to analyze the impacts of air pollution control policies. Finally, the methodology used by Arthur D. Little, Inc. to estimate the costs of alternative air pollution control policies for the Consolidated Coal Company is discussed. These descriptions and evaluations are based on information obtained from published reports and from draft documentation of the models provided by the EPA. 12 refs., 1 fig.

  18. Epistasis analysis for quantitative traits by functional regression model.

    PubMed

    Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao

    2014-06-01

    The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for testing the interaction between common variants and are difficult to apply to rare variants because of their prohibitive computational time and limited power. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm of interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions and collectively test interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as a basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model to collectively test interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. By intensive simulations, we demonstrate that the functional regression models for interaction analysis of the quantitative trait have the correct type I error rates and much higher power to detect interactions than the current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and the CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10^-10) in the ESP, and 11 were replicated in the CHARGE-S study.
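    A toy stand-in for the region-based functional interaction test described above (not the authors' implementation): each gene's SNP matrix is projected onto a few smooth basis functions, and the interaction is tested jointly on the products of the resulting scores with a standard nested-model F-test.

```python
# Toy region-based (functional) interaction test: project each gene's SNP matrix
# onto a low-dimensional basis, then F-test the products of the scores.
import numpy as np
from numpy.polynomial.legendre import legvander
from scipy.stats import f as f_dist

rng = np.random.default_rng(2)
n, p1, p2, k = 500, 30, 40, 3                    # samples, SNPs per gene, basis size

def functional_scores(G, k):
    """Project an n x p genotype matrix onto k Legendre basis functions
    evaluated at normalized SNP positions (assumed equally spaced here)."""
    pos = np.linspace(-1, 1, G.shape[1])
    B = legvander(pos, k - 1)                    # p x k basis matrix
    return G @ B / G.shape[1]

G1 = rng.binomial(2, 0.3, (n, p1)).astype(float)
G2 = rng.binomial(2, 0.2, (n, p2)).astype(float)
y = rng.standard_normal(n)                       # null phenotype for illustration

X1, X2 = functional_scores(G1, k), functional_scores(G2, k)
inter = np.column_stack([X1[:, i] * X2[:, j] for i in range(k) for j in range(k)])
X0 = np.column_stack([np.ones(n), X1, X2])       # main effects only
X = np.column_stack([X0, inter])                 # main effects + interaction

rss = lambda M: np.sum((y - M @ np.linalg.lstsq(M, y, rcond=None)[0]) ** 2)
df1, df2 = inter.shape[1], n - X.shape[1]
F = ((rss(X0) - rss(X)) / df1) / (rss(X) / df2)
print("interaction p-value:", f_dist.sf(F, df1, df2))
```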

  19. Evaluation Between Existing and Improved CCF Modeling Using the NRC SPAR Models

    SciTech Connect

    James K. Knudsen

    2010-06-01

    The NRC SPAR models currently employ the alpha factor common cause failure (CCF) methodology and model CCF for a group of redundant components as a single “rolled-up” basic event. These SPAR models will be updated to employ a more computationally intensive and accurate approach by expanding the CCF basic events for all active components to include all terms that appear in the Basic Parameter Model (BPM). A discussion is provided to detail the differences between the rolled-up common cause group (CCG) and expanded BPM adjustment concepts based on differences in core damage frequency and individual component importance measures. Lastly, a hypothetical condition is evaluated with a SPAR model to show the difference in results between the current adjustment method (rolled-up CCF events) and the newer method employing all of the expanded terms in the BPM. The event evaluation on the SPAR model employing the expanded terms will be solved using the graphical evaluation module (GEM) and the proposed method discussed in Reference 1.
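    For background on the expansion discussed here, the sketch below applies the standard alpha-factor relation (as given, e.g., in NUREG/CR-5485 for non-staggered testing) to turn a component's total failure probability and alpha factors into Basic Parameter Model terms for each failure multiplicity; the numerical values are illustrative.

```python
# Expand a common cause group of size m into Basic Parameter Model terms using
# the alpha-factor model (non-staggered testing):
#   Q_k = k / C(m-1, k-1) * (alpha_k / alpha_t) * Q_t,  alpha_t = sum(k * alpha_k)
from math import comb

def bpm_terms(q_total, alphas):
    """q_total: total failure probability of one component (Q_t).
    alphas: [alpha_1, ..., alpha_m]. Returns {k: Q_k}."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return {k: k / comb(m - 1, k - 1) * alphas[k - 1] / alpha_t * q_total
            for k in range(1, m + 1)}

# Illustrative 3-component group: Q_1 covers independent failures, Q_3 the
# simultaneous failure of all three components from a shared cause.
print(bpm_terms(1e-3, [0.95, 0.04, 0.01]))
```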

  20. Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.

    PubMed

    Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao

    2015-08-01

    Meta-analysis of genetic data must account for differences among studies, including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis combining data from multiple studies is difficult, and novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT, and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. PMID:26058849

  2. Uncertainty in Quantitative Precipitation Estimates and Forecasts in a Hydrologic Modeling Context (Invited)

    NASA Astrophysics Data System (ADS)

    Gourley, J. J.; Kirstetter, P.; Hong, Y.; Hardy, J.; Flamig, Z.

    2013-12-01

    This study presents a methodology to account for uncertainty in radar-based rainfall rate estimation using NOAA/NSSL's Multi-Radar Multi-Sensor (MRMS) products. The focus of the study is on flood forecasting, including flash floods, in ungauged catchments throughout the conterminous US. An error model is used to derive probability distributions of rainfall rates, explicitly accounting for rain typology and uncertainty in the reflectivity-to-rainfall relationships. This approach preserves the fine space/time sampling properties (2 min/1 km) of the radar and conditions probabilistic quantitative precipitation estimates (PQPE) on the rain rate and rainfall type. Uncertainty in rainfall amplitude is the primary factor accounted for in the PQPE development. Additional uncertainties due to rainfall structures, locations, and timing must be considered when using quantitative precipitation forecast (QPF) products as forcing to a hydrologic model. A new method will be presented that shows how QPF ensembles are used in a hydrologic modeling context to derive probabilistic flood forecast products. This method considers the forecast rainfall intensity and morphology superimposed on pre-existing hydrologic conditions to identify basin scales that are most at risk.
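    To make the idea of probabilistic rain-rate estimation concrete, the sketch below samples an ensemble of reflectivity-to-rainfall conversions around the classic Marshall-Palmer relation Z = 200 R^1.6 and reports rain-rate quantiles; the parameter spread is an invented stand-in for an error model conditioned on rain type.

```python
# Probabilistic QPE sketch: convert reflectivity (dBZ) to rain-rate quantiles by
# sampling uncertainty in the Z = a * R**b relation (Marshall-Palmer: a=200, b=1.6).
import numpy as np

rng = np.random.default_rng(3)

def rain_rate_quantiles(dbz, n=10000, q=(0.05, 0.5, 0.95)):
    z = 10.0 ** (dbz / 10.0)                     # reflectivity in mm^6/m^3
    a = rng.lognormal(np.log(200.0), 0.2, n)     # assumed spread around M-P values
    b = rng.normal(1.6, 0.05, n)
    rates = (z / a) ** (1.0 / b)                 # invert Z = a * R**b  (mm/h)
    return np.quantile(rates, q)

print(rain_rate_quantiles(40.0))                 # e.g., a 40 dBZ convective echo
```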

  3. Quantitative Decomposition of Dynamics of Mathematical Cell Models: Method and Application to Ventricular Myocyte Models.

    PubMed

    Shimayoshi, Takao; Cha, Chae Young; Amano, Akira

    2015-01-01

    Mathematical cell models are effective tools to understand cellular physiological functions precisely. For detailed analysis of model dynamics in order to investigate how much each component affects cellular behaviour, mathematical approaches are essential. This article presents a numerical analysis technique, applicable to any complicated cell model formulated as a system of ordinary differential equations, to quantitatively evaluate the contributions of respective model components to the model dynamics in the intact situation. The present technique employs a novel mathematical index for decomposed dynamics with respect to each differential variable, along with a concept named the instantaneous equilibrium point, which represents the trend of a model variable at some instant. This article also illustrates applications of the method to comprehensive myocardial cell models for analysing insights into the mechanisms of action potential generation and calcium transient. The analysis results exhibit quantitative contributions of individual channel gating mechanisms and ion exchanger activities to membrane repolarization, and of calcium fluxes and buffers to the rise and fall of the cytosolic calcium level. These analyses quantitatively explicate the principles of the model, which leads to a better understanding of cellular dynamics. PMID:26091413

  5. Numerical Modelling of Extended Leak-Off Test with a Pre-Existing Fracture

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Larsen, I.; Bauer, A.

    2016-04-01

    Extended leak-off test (XLOT) is one of the few techniques available for stress measurements in oil and gas wells. Interpretation of the test is often difficult since the results depend on a multitude of factors, including the presence of natural or drilling-induced fractures in the near-well area. Coupled numerical modelling of XLOT has been performed to investigate the pressure behaviour during the flowback phase as well as the effect of a pre-existing fracture on the test results in a low-permeability formation. Essential features of XLOT known from field measurements are captured by the model, including the saw-tooth shape of the pressure vs injected volume curve, and the change of slope in the pressure vs time curve during flowback used by operators as an indicator of the bottomhole pressure reaching the minimum in situ stress. Simulations with a pre-existing fracture running from the borehole wall in the radial direction have revealed that the results of XLOT are quite sensitive to the orientation of the pre-existing fracture. In particular, the fracture initiation pressure and the formation breakdown pressure increase steadily with decreasing angle between the fracture and the minimum in situ stress. Our findings seem to invalidate the use of the fracture initiation pressure and the formation breakdown pressure for stress measurements or rock strength evaluation purposes.

  6. Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy

    ERIC Educational Resources Information Center

    Smith, Rachel; Cantrell, Kevin

    2007-01-01

    A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
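    The effect referred to in the title can be reproduced in a few lines: with polychromatic light, the measured absorbance is the logarithm of an intensity-weighted sum of transmittances, which bends the Beer's law calibration curve at high concentration. The two-wavelength band and molar absorptivities below are invented for illustration.

```python
# Effect of polychromatic light on Beer's law: for a band of wavelengths with
# intensities I_i and molar absorptivities eps_i (L/mol/cm), the apparent
# absorbance is A = -log10( sum(I_i * 10**(-eps_i*b*c)) / sum(I_i) ).
import numpy as np

eps = np.array([1000.0, 600.0])     # hypothetical absorptivities at two wavelengths
I = np.array([1.0, 1.0])            # equal source intensity at both wavelengths
b = 1.0                             # path length, cm

for c in [1e-4, 5e-4, 1e-3, 2e-3]:  # concentrations, mol/L
    T = np.sum(I * 10.0 ** (-eps * b * c)) / np.sum(I)
    A = -np.log10(T)
    # With monochromatic light A would be eps*b*c; polychromatic A falls below it.
    print(f"c={c:.0e}  A_poly={A:.3f}  A_mono(eps=1000)={1000 * b * c:.3f}")
```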

  8. Fit for purpose application of currently existing animal models in the discovery of novel epilepsy therapies.

    PubMed

    Löscher, Wolfgang

    2016-10-01

    Animal seizure and epilepsy models continue to play an important role in the early discovery of new therapies for the symptomatic treatment of epilepsy. Since 1937, with the discovery of phenytoin, almost all anti-seizure drugs (ASDs) have been identified by their effects in animal models, and millions of patients worldwide have benefited from the successful translation of animal data into the clinic. However, several unmet clinical needs remain, including resistance to ASDs in about 30% of patients with epilepsy, adverse effects of ASDs that can reduce quality of life, and the lack of treatments that can prevent development of epilepsy in patients at risk following brain injury. The aim of this review is to critically discuss the translational value of currently used animal models of seizures and epilepsy, particularly what animal models can tell us about epilepsy therapies in patients and which limitations exist. Principles of translational medicine will be used for this discussion. An essential requirement for translational medicine to improve success in drug development is the availability of animal models with high predictive validity for a therapeutic drug response. For this requirement, the model, by definition, does not need to be a perfect replication of the clinical condition, but it is important that the validation provided for a given model is fit for purpose. The present review should guide researchers in both academia and industry as to what can and cannot be expected from animal models in preclinical development of epilepsy therapies, which models are best suited for which purpose, and for which aspects suitable models are as yet not available. Overall, further development is needed to improve and validate animal models for the diverse areas in epilepsy research where suitable fit-for-purpose models are urgently needed in the search for more effective treatments. PMID:27505294

  9. Local Existence of Weak Solutions to Kinetic Models of Granular Media

    NASA Astrophysics Data System (ADS)

    Agueh, Martial

    2016-08-01

    We prove in any dimension $d \geq 1$ a local in time existence of weak solutions to the Cauchy problem for the kinetic equation of granular media, $\partial_t f + v \cdot \nabla_x f = \operatorname{div}_v[f(\nabla W *_v f)]$, when the initial data are nonnegative, integrable and bounded functions with compact support in velocity, and the interaction potential $W$ is a $C^2(\mathbb{R}^d)$ radially symmetric convex function. Our proof is constructive and relies on a splitting argument in position and velocity, where the spatially homogeneous equation is interpreted as the gradient flow of a convex interaction energy with respect to the quadratic Wasserstein distance. Our result generalizes the local existence result obtained by Benedetto et al. (RAIRO Modél Math Anal Numér 31(5):615-641, 1997) on the one-dimensional model of this equation for a cubic power-law interaction potential.

  10. Unbiased Quantitative Models of Protein Translation Derived from Ribosome Profiling Data

    PubMed Central

    Gritsenko, Alexey A.; Hulsman, Marc; Reinders, Marcel J. T.; de Ridder, Dick

    2015-01-01

    Translation of RNA to protein is a core process for any living organism. While for some steps of this process the effect on protein production is understood, a holistic understanding of translation still remains elusive. In silico modelling is a promising approach for elucidating the process of protein synthesis. Although a number of computational models of the process have been proposed, their application is limited by the assumptions they make. Ribosome profiling (RP), a relatively new sequencing-based technique capable of recording snapshots of the locations of actively translating ribosomes, is a promising source of information for deriving unbiased data-driven translation models. However, quantitative analysis of RP data is challenging due to high measurement variance and the inability to discriminate between the number of ribosomes measured on a gene and their speed of translation. We propose a solution in the form of a novel multi-scale interpretation of RP data that allows for deriving models with translation dynamics extracted from the snapshots. We demonstrate the usefulness of this approach by simultaneously determining for the first time per-codon translation elongation and per-gene translation initiation rates of Saccharomyces cerevisiae from RP data for two versions of the Totally Asymmetric Exclusion Process (TASEP) model of translation. We do this in an unbiased fashion, by fitting the models using only RP data with a novel optimization scheme based on Monte Carlo simulation to keep the problem tractable. The fitted models match the data significantly better than existing models and their predictions show better agreement with several independent protein abundance datasets than existing models. Results additionally indicate that the tRNA pool adaptation hypothesis is incomplete, with evidence suggesting that tRNA post-transcriptional modifications and codon context may play a role in determining codon elongation rates. PMID:26275099
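    The TASEP model mentioned above is straightforward to sketch: ribosomes enter at an initiation rate when the first codon is free, hop from codon to codon at site-specific elongation rates, and cannot overlap. The kinetic Monte Carlo sketch below uses point-size ribosomes (full implementations account for the extended ribosome footprint) and invented rates.

```python
# Minimal TASEP sketch of translation: point-size ribosomes hop codon-to-codon.
import numpy as np

rng = np.random.default_rng(4)
L = 60                                   # codons
elong = rng.uniform(2.0, 10.0, L)        # per-codon elongation rates (1/s), invented
init_rate = 0.5                          # initiation rate (1/s), invented

occ = np.zeros(L, dtype=bool)            # site occupancy
t, completed, t_end = 0.0, 0, 2000.0
while t < t_end:
    # Enabled events: initiation (site 0 free) and unblocked elongation steps.
    events, rates = [], []
    if not occ[0]:
        events.append(-1)
        rates.append(init_rate)
    for i in np.flatnonzero(occ):
        if i == L - 1 or not occ[i + 1]:
            events.append(i)
            rates.append(elong[i])
    total = sum(rates)
    t += rng.exponential(1.0 / total)    # Gillespie waiting time
    e = events[rng.choice(len(events), p=np.array(rates) / total)]
    if e == -1:
        occ[0] = True                    # new ribosome loads
    elif e == L - 1:
        occ[e] = False                   # termination: protein completed
        completed += 1
    else:
        occ[e], occ[e + 1] = False, True  # hop forward one codon

print("protein production rate (1/s):", completed / t_end)
```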

  11. Results from the ESA SREM Monitors and Comparison with Existing Radiation Belt Models

    NASA Astrophysics Data System (ADS)

    Evans, H. D. R.; Bühler, P.; Hajdas, W.; Daly, E.; Nieminen, P.; Mohammadzadeh, A.

    The Standard Radiation Monitor (SREM) is a simple particle detector developed for wide application on ESA satellites. It measures high-energy protons and electrons of the space environment with a ±20° angular resolution and limited spectral information. Of the ten SREMs that have been manufactured, four have so far flown. The first model, on STRV-1c, functioned well until an early spacecraft failure. The other three are on board the ESA spacecraft INTEGRAL, ROSETTA and PROBA-1. Another model will fly on GIOVE-B, expected to be launched later this year. The diverse orbits of these spacecraft and the common calibration of the monitors provide a unique dataset covering a wide range of B-L space, allowing a direct comparison of the radiation levels in the belts at different locations and of the effects of geomagnetic shielding. Data from the PROBA SREM and INTEGRAL SREM are compared with existing radiation belt models.

  12. Using Existing Arctic Atmospheric Mercury Measurements to Refine Global and Regional Scale Atmospheric Transport Models

    NASA Astrophysics Data System (ADS)

    Moore, C. W.; Dastoor, A.; Steffen, A.; Nghiem, S. V.; Agnan, Y.; Obrist, D.

    2015-12-01

    Northern hemisphere background atmospheric concentrations of gaseous elemental mercury (GEM) have been declining by up to 25% over the last ten years at some lower latitude sites. However, this decline has ranged from no decline to 9% over 10 years at Arctic long-term measurement sites. Measurements also show a highly dynamic nature of mercury (Hg) species in Arctic air and snow from early spring to the end of summer when biogeochemical transformations peak. Currently, models are unable to reproduce this variability accurately. Estimates of Hg accumulation in the Arctic and Arctic Ocean by models require a full mechanistic understanding of the multi-phase redox chemistry of Hg in air and snow as well as the role of meteorology in the physicochemical processes of Hg. We will show how findings from ground-based atmospheric Hg measurements like those made in spring 2012 during the Bromine, Ozone and Mercury Experiment (BROMEX) near Barrow, Alaska can be used to reduce the discrepancy between measurements and model output in the Canadian GEM-MACH-Hg model. The model is able to reproduce and to explain some of the variability in Arctic Hg measurements but discrepancies still remain. One improvement involves incorporation of new physical mechanisms such as the one we were able to identify during BROMEX. This mechanism, by which atmospheric mercury depletion events are abruptly ended via sea ice leads opening and inducing shallow convective mixing that replenishes GEM (and ozone) in the near surface atmospheric layer, causing an immediate recovery from the depletion event, is currently lacking in models. Future implementation of this physical mechanism will have to incorporate current remote sensing sea ice products but also rely on the development of products that can identify sea ice leads quantitatively. In this way, we can advance the knowledge of the dynamic nature of GEM in the Arctic and the impact of climate change along with new regulations on the overall

  13. Herd immunity and pneumococcal conjugate vaccine: a quantitative model.

    PubMed

    Haber, Michael; Barskey, Albert; Baughman, Wendy; Barker, Lawrence; Whitney, Cynthia G; Shaw, Kate M; Orenstein, Walter; Stephens, David S

    2007-07-20

    Invasive pneumococcal disease in older children and adults declined markedly after introduction in 2000 of the pneumococcal conjugate vaccine for young children. An empirical quantitative model was developed to estimate the herd (indirect) effects on the incidence of invasive disease among persons ≥5 years of age induced by vaccination of young children with 1, 2, or ≥3 doses of the pneumococcal conjugate vaccine, Prevnar (PCV7), containing serotypes 4, 6B, 9V, 14, 18C, 19F and 23F. From 1994 to 2003, cases of invasive pneumococcal disease were prospectively identified in Georgia Health District-3 (eight metropolitan Atlanta counties) by Active Bacterial Core surveillance (ABCs). From 2000 to 2003, vaccine coverage levels of PCV7 for children aged 19-35 months in Fulton and DeKalb counties (of Atlanta) were estimated from the National Immunization Survey (NIS). Based on incidence data and the estimated average number of doses received by 15 months of age, a Poisson regression model was fit, describing the trend in invasive pneumococcal disease in groups not targeted for vaccination (i.e., adults and older children) before and after the introduction of PCV7. Highly significant declines in all the serotypes contained in PCV7 in all unvaccinated populations (5-19, 20-39, 40-64, and >64 years) from 2000 to 2003 were found under the model. No significant change in incidence was seen from 1994 to 1999, indicating rates were stable prior to vaccine introduction. Among unvaccinated persons 5+ years of age, the modeled incidence of disease caused by PCV7 serotypes as a group dropped 38.4%, 62.0%, and 76.6% for 1, 2, and 3 doses, respectively, received on average by the population of children by the time they are 15 months of age. Incidence of serotypes 14 and 23F had consistent significant declines in all unvaccinated age groups. In contrast, the herd immunity effects on vaccine-related serotype 6A incidence were inconsistent. Increasing trends of non

  14. Development of an Experimental Model of Diabetes Co-Existing with Metabolic Syndrome in Rats

    PubMed Central

    Suman, Rajesh Kumar; Ray Mohanty, Ipseeta; Borde, Manjusha K.; Maheshwari, Ujwala; Deshmukh, Y. A.

    2016-01-01

    Background. The incidence of metabolic syndrome co-existing with diabetes mellitus is on the rise globally. Objective. The present study was designed to develop a unique animal model that will mimic the pathological features seen in individuals with diabetes and metabolic syndrome, suitable for pharmacological screening of drugs. Materials and Methods. A combination of High-Fat Diet (HFD) and low dose of streptozotocin (STZ) at 30, 35, and 40 mg/kg was used to induce metabolic syndrome in the setting of diabetes mellitus in Wistar rats. Results. The 40 mg/kg STZ produced sustained hyperglycemia and the dose was thus selected for the study to induce diabetes mellitus. Various components of metabolic syndrome such as dyslipidemia (increased triglyceride, total cholesterol, LDL cholesterol, and decreased HDL cholesterol), diabetes mellitus (blood glucose, HbA1c, serum insulin, and C-peptide), and hypertension (systolic blood pressure) were mimicked in the developed model of metabolic syndrome co-existing with diabetes mellitus. In addition to significant cardiac injury, atherogenic index, inflammation (hs-CRP), and decline in hepatic and renal function were observed in the HF-DC group when compared to NC group rats. The histopathological assessment confirmed the presence of edema, necrosis, and inflammation in the heart, pancreas, liver, and kidney of the HF-DC group as compared to NC. Conclusion. The present study has developed a unique rodent model of metabolic syndrome, with diabetes as an essential component. PMID:26880906

  15. From sample to signal in laser-induced breakdown spectroscopy: An experimental assessment of existing algorithms and theoretical modeling approaches

    NASA Astrophysics Data System (ADS)

    Herrera, Kathleen Kate

    In recent years, laser-induced breakdown spectroscopy (LIBS) has become an increasingly popular technique for many diverse applications. This is mainly due to its numerous attractive features, including minimal to no sample preparation, minimal sample invasiveness, sample versatility, remote detection capability and simultaneous multi-elemental capability. However, most LIBS applications are limited to semi-quantitative or relative analysis due to the difficulty in finding matrix-matched standards or a constant reference component in the system for calibration purposes. Therefore, methods which do not require the use of reference standards, hence standard-free, are highly desired. In this research, a general LIBS system was constructed, calibrated and optimized. The corresponding instrumental function and relative spectral efficiency of the detection system were also investigated. In addition, development of a spectral acquisition method was necessary so that data in the wide spectral range from 220 to 700 nm could be obtained using a non-echelle detection system. This requires multiple acquisitions of successive spectral windows and splicing the windows together with optimum overlap using an in-house program written in Q-basic. Two existing standard-free approaches, the calibration-free LIBS (CF-LIBS) technique and the Monte Carlo simulated annealing optimization modeling algorithm for LIBS (MC-LIBS), were experimentally evaluated in this research. The CF-LIBS approach, which is based on the Boltzmann plot method, is used to directly evaluate the plasma temperature, electron number density and relative concentrations of species present in a given sample without the need for reference standards. In the second approach, the initial value problem is solved based on the model of a radiative plasma expanding into vacuum. Here, the prediction of the initial plasma conditions (i.e., temperature and elemental number densities) is achieved by a step-wise Monte Carlo
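    The Boltzmann plot method underlying CF-LIBS is compact enough to sketch: for optically thin lines of one species, ln(I·λ/(g_k·A_ki)) is linear in the upper-level energy E_k with slope -1/(k_B·T). The line data below are invented; only the fitting procedure is the point.

```python
# Boltzmann plot: estimate plasma temperature from emission line intensities.
# ln(I * lam / (g_k * A_ki)) = -E_k / (k_B * T) + const, for lines of one species.
import numpy as np

k_B = 8.617333e-5                       # Boltzmann constant, eV/K

# Hypothetical lines: wavelength (nm), intensity (a.u.), g_k, A_ki (1/s), E_k (eV)
lines = np.array([
    # lam,    I,      g_k, A_ki,   E_k
    [396.15, 1200.0, 2.0, 9.8e7, 3.14],
    [394.40,  600.0, 2.0, 4.9e7, 3.14],
    [309.27,  900.0, 4.0, 7.3e7, 4.02],
    [237.31,  150.0, 4.0, 9.1e6, 5.24],
])
lam, I, g, A, E = lines.T
y = np.log(I * lam / (g * A))
slope, _ = np.polyfit(E, y, 1)          # slope = -1 / (k_B * T)
print(f"T = {-1.0 / (k_B * slope):.0f} K")
```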

  16. Existence and qualitative properties of travelling waves for an epidemiological model with mutations

    NASA Astrophysics Data System (ADS)

    Griette, Quentin; Raoul, Gaël

    2016-05-01

    In this article, we are interested in a non-monotonic system of logistic reaction-diffusion equations. This system of equations models an epidemic where two types of pathogens are competing, and a mutation can change one type into the other with a certain rate. We show the existence of travelling waves with minimal speed, which are usually non-monotonic. Then we provide a description of the shape of those constructed travelling waves, and relate them to some Fisher-KPP fronts with non-minimal speed.

  17. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    PubMed

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards.
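    A toy illustration of the difference between the two calculation models compared above: counting the dye-binding residues of a protein sequence under M1 (Arg and Lys) versus M2 (Arg, Lys, and His); the sequence is arbitrary, and the counts feed whatever normalization the assay calibration uses.

```python
# Count Coomassie-binding residues under the two models compared in the study:
# M1 counts Arg (R) and Lys (K); M2 additionally counts His (H).
def dye_binding_counts(seq):
    seq = seq.upper()
    m1 = sum(seq.count(aa) for aa in "RK")
    m2 = m1 + seq.count("H")
    return {"M1 (R+K)": m1, "M2 (R+K+H)": m2}

# Arbitrary example sequence.
print(dye_binding_counts("MKHHRAKLVGRQKHEK"))
```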

  18. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation

  19. Existence and exponential stability of positive almost periodic solution for Nicholson's blowflies models on time scales.

    PubMed

    Li, Yongkun; Li, Bing

    2016-01-01

    In this paper, we first give a new definition of almost periodic time scales, two new definitions of almost periodic functions on time scales and investigate some basic properties of them. Then, as an application, by using a fixed point theorem in Banach space and the time scale calculus theory, we obtain some sufficient conditions for the existence and exponential stability of positive almost periodic solutions for a class of Nicholson's blowflies models on time scales. Finally, we present an illustrative example to show the effectiveness of obtained results. Our results show that under a simple condition the continuous-time Nicholson's blowflies model and its discrete-time analogue have the same dynamical behaviors. PMID:27468397

  20. An existence result for a model of complete damage in elastic materials with reversible evolution

    NASA Astrophysics Data System (ADS)

    Bonetti, Elena; Freddi, Francesco; Segatti, Antonio

    2016-07-01

    In this paper, we consider a model describing evolution of damage in elastic materials, in which stiffness completely degenerates once the material is fully damaged. The model is written by using a phase transition approach, with respect to the damage parameter. In particular, a source of damage is represented by a quadratic form involving deformations, which vanishes in the case of complete damage. Hence, an internal constraint is ensured by a maximal monotone operator. The evolution of damage is considered "reversible", in the sense that the material may repair itself. We can prove an existence result for a suitable weak formulation of the problem, rewritten in terms of a new variable (an internal stress). Some numerical simulations are presented in agreement with the mathematical analysis of the system.

  1. Pre-existing discontinuities and oblique rifting in the Kenya Rift: Comparisons with analogue models.

    NASA Astrophysics Data System (ADS)

    Rolet, J.; Gloaguen, R.; Dooley, T.; McClay, K.

    2001-12-01

    Oblique rift structures such as the SSE-trending Aswa Transverse Zone in the Kenya rift are poorly understood and are rarely taken into account in geometric and kinematic models for the origin of this rift zone. However, remote sensing demonstrates that transverse structures are quite numerous and have a significant influence on the geometry and segmentation of the rift and the development of faults within or at the boundaries of the oblique zones. The importance of these transverse zones varies depending on their orientation and position with respect to the main rift. The origin of these oblique zones can be directly related to pre-existing fabrics and shear zones in the Precambrian basement and thus act as mechanically distinct structural domains during later extensional events. In order to assess the importance and role of these oblique structures we used optical (SPOT, LANDSAT) and microwave (RADARSAT, ERS) data combined with field observations and measurements. Collected structural data were then compared with scaled physical models of orthogonal and oblique rifting in order to refine the rift model. The data and comparison with physical models suggest that these transverse zones are best described as oblique rift zones where the rift border faults are parallel to the basement grain whereas intra-rift fault systems form orthogonal to the extension direction. This model also implies that the present day extension direction in Kenya is oriented E-W.

  2. Existence of the critical endpoint in the vector meson extended linear sigma model

    NASA Astrophysics Data System (ADS)

    Kovács, P.; Szép, Zs.; Wolf, Gy.

    2016-06-01

    The chiral phase transition of strongly interacting matter is investigated at nonzero temperature and baryon chemical potential (μB) within an extended (2+1)-flavor Polyakov constituent quark-meson model that incorporates the effect of the vector and axial vector mesons. The effect of the fermionic vacuum and thermal fluctuations computed from the grand potential of the model is taken into account in the curvature masses of the scalar and pseudoscalar mesons. The parameters of the model are determined by comparing masses and tree-level decay widths with experimental values in a χ²-minimization procedure that selects between various possible assignments of scalar nonet states to physical particles. We examine the restoration of the chiral symmetry by monitoring the temperature evolution of the condensates, the masses of the chiral partners, and the mixing angles of the pseudoscalar η-η' complex and the corresponding scalar complex. We calculate the pressure and various thermodynamical observables derived from it and compare them to the continuum-extrapolated lattice results of the Wuppertal-Budapest collaboration. We study the T-μB phase diagram of the model and find that a critical endpoint exists for parameter sets of the model that give acceptable values of χ².

  3. Model based prediction of the existence of the spontaneous cochlear microphonic

    NASA Astrophysics Data System (ADS)

    Ayat, Mohammad; Teal, Paul D.

    2015-12-01

    In the mammalian cochlea, self-sustaining oscillation of the basilar membrane can cause vibration of the ear drum and produce spontaneous narrow-band air pressure fluctuations in the ear canal. These spontaneous fluctuations are known as spontaneous otoacoustic emissions. Small perturbations in the feedback gain of the cochlear amplifier have been proposed as the generation source of self-sustaining oscillations of the basilar membrane. We hypothesise that the self-sustaining oscillations resulting from small perturbations in feedback gain produce spontaneous potentials in the cochlea. We demonstrate that, according to the results of the model, a measurable spontaneous cochlear microphonic must exist in the human cochlea. The existence of this signal has not yet been reported; however, this spontaneous electrical signal could play an important role in auditory research. Successful or unsuccessful recording of this signal will indicate whether previous hypotheses about the generation source of spontaneous otoacoustic emissions are valid or should be amended. In addition, according to the proposed model, the spontaneous cochlear microphonic is essentially an electrical analogue of spontaneous otoacoustic emissions. In certain experiments, the spontaneous cochlear microphonic may be more easily detected near its generation site, with proper electrical instrumentation, than is the spontaneous otoacoustic emission.

  4. Daphnia and fish toxicity of (benzo)triazoles: validated QSAR models, and interspecies quantitative activity-activity modelling.

    PubMed

    Cassani, Stefano; Kovarich, Simona; Papa, Ester; Roy, Partha Pratim; van der Wal, Leon; Gramatica, Paola

    2013-08-15

    Due to their chemical properties, synthetic triazoles and benzo-triazoles ((B)TAZs) are mainly distributed to the water compartments in the environment, and because of their wide use their potential effects on aquatic organisms are a cause of concern. Non-testing approaches like those based on quantitative structure-activity relationships (QSARs) are valuable tools to maximize the information contained in existing experimental data and predict missing information while minimizing animal testing. In the present study, externally validated QSAR models for the prediction of acute (B)TAZ toxicity in Daphnia magna and Oncorhynchus mykiss have been developed according to the principles for the validation of QSARs and their acceptability for regulatory purposes proposed by the Organization for Economic Co-operation and Development (OECD). These models are based on theoretical molecular descriptors, and are statistically robust, externally predictive and characterized by a verifiable structural applicability domain. They have been applied to predict acute toxicity for over 300 (B)TAZs without experimental data, many of which are in the pre-registration list of the REACH regulation. Additionally, a model based on quantitative activity-activity relationships (QAAR) has been developed, which allows for interspecies extrapolation from daphnids to fish. The importance of QSAR/QAAR, especially when dealing with specific chemical classes like (B)TAZs, for screening and prioritization of pollutants under REACH has been highlighted.

  5. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models.

    PubMed

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-08-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee - varroa mite - virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions.

  6. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models.

    PubMed

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-08-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee - varroa mite - virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431

  7. Towards a systems approach for understanding honeybee decline: a stocktaking and synthesis of existing models

    PubMed Central

    Becher, Matthias A; Osborne, Juliet L; Thorbek, Pernille; Kennedy, Peter J; Grimm, Volker

    2013-01-01

    The health of managed and wild honeybee colonies appears to have declined substantially in Europe and the United States over the last decade. Sustainability of honeybee colonies is important not only for honey production, but also for pollination of crops and wild plants alongside other insect pollinators. A combination of causal factors, including parasites, pathogens, land use changes and pesticide usage, are cited as responsible for the increased colony mortality. However, despite detailed knowledge of the behaviour of honeybees and their colonies, there are no suitable tools to explore the resilience mechanisms of this complex system under stress. Empirically testing all combinations of stressors in a systematic fashion is not feasible. We therefore suggest a cross-level systems approach, based on mechanistic modelling, to investigate the impacts of (and interactions between) colony and land management. We review existing honeybee models that are relevant to examining the effects of different stressors on colony growth and survival. Most of these models describe honeybee colony dynamics, foraging behaviour or honeybee – varroa mite – virus interactions. We found that many, but not all, processes within honeybee colonies, epidemiology and foraging are well understood and described in the models, but there is no model that couples in-hive dynamics and pathology with foraging dynamics in realistic landscapes. Synthesis and applications. We describe how a new integrated model could be built to simulate multifactorial impacts on the honeybee colony system, using building blocks from the reviewed models. The development of such a tool would not only highlight empirical research priorities but also provide an important forecasting tool for policy makers and beekeepers, and we list examples of relevant applications to bee disease and landscape management decisions. PMID:24223431

  8. Quantitative Structure--Activity Relationship Modeling of Rat Acute Toxicity by Oral Exposure

    EPA Science Inventory

    Background: Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. Objective: In this study, a combinatorial QSAR approach has been employed for the creation of robust and predictive models of acute toxi...

  9. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.

  10. Using Existing Coastal Models To Address Ocean Acidification Modeling Needs: An Inside Look at Several East and Gulf Coast Regions

    NASA Astrophysics Data System (ADS)

    Jewett, E.

    2013-12-01

    Ecosystem forecast models have been in development for many US coastal regions for decades in an effort to understand how certain drivers, such as nutrients, freshwater and sediments, affect coastal water quality. These models have been used to inform coastal management interventions such as imposition of total maximum daily load allowances for nutrients or sediments to control hypoxia, harmful algal blooms and/or water clarity. Given the overlap of coastal acidification with hypoxia, it seems plausible that the geochemical models built to explain hypoxia and/or HABs might also be used, with additional terms, to understand how atmospheric CO2 is interacting with local biogeochemical processes to affect coastal waters. Examples of existing biogeochemical models from Galveston, the northern Gulf of Mexico, Tampa Bay, West Florida Shelf, Pamlico Sound, Chesapeake Bay, and Narragansett Bay will be presented and explored for suitability for ocean acidification modeling purposes.

  11. Results from the ESA SREM monitors and comparison with existing radiation belt models

    NASA Astrophysics Data System (ADS)

    Evans, H. D. R.; Bühler, P.; Hajdas, W.; Daly, E. J.; Nieminen, P.; Mohammadzadeh, A.

    2008-11-01

    The Standard Radiation Environment Monitor (SREM) is a simple particle detector developed for wide application on ESA satellites. It measures high-energy protons and electrons of the space environment with a 20° angular resolution and limited spectral information. Of the ten SREMs that have been manufactured, four have so far flown. The first model, on STRV-1c, functioned well until an early spacecraft failure. The other three are on board the ESA spacecraft INTEGRAL, ROSETTA and PROBA-1. Another model is flying on GIOVE-B, launched in April 2008, with three L-2 science missions to follow: Herschel and Planck in 2008, and GAIA in 2011. The diverse orbits of these spacecraft and the common calibration of the monitors provide a unique dataset covering a wide range of B-L∗ space, allowing a direct comparison of the radiation levels in the belts at different locations and of the effects of geomagnetic shielding. Data from the PROBA/SREM and INTEGRAL/IREM are compared with existing radiation belt models.

  12. Frequency domain modeling and dynamic characteristics evaluation of existing wind turbine systems

    NASA Astrophysics Data System (ADS)

    Chiang, Chih-Hung; Yu, Chih-Peng

    2016-04-01

    It is quite well accepted that frequency domain procedures are suitable for the design and dynamic analysis of wind turbine structures, especially for floating offshore wind turbines, since random wind loads and wave-induced motions are most readily simulated in the frequency domain. This paper presents specific applications of an effective frequency domain scheme to the linear analysis of wind turbine structures, in which a 1-D spectral element was developed based on the axially-loaded member. The solution schemes are summarized for the spectral analyses of the tower, the blades, and the combined system with selected frequency-dependent coupling effects from foundation-structure interactions. Numerical examples demonstrate that the modal frequencies obtained using spectral-element models are in good agreement with those found in the literature. A 5-element mono-pile model deviates by less than 0.3% from an existing 160-element model. It is preliminarily concluded that the proposed scheme is relatively efficient for quick verification of test data obtained from on-site vibration measurements using a microwave interferometer.

  13. Using quantitative structure-activity relationship modeling to quantitatively predict the developmental toxicity of halogenated azole compounds.

    PubMed

    Craig, Evisabel A; Wang, Nina Ching; Zhao, Q Jay

    2014-07-01

    Developmental toxicity is a relevant endpoint for the comprehensive assessment of human health risk from chemical exposure. However, animal developmental toxicity data remain unavailable for many environmental contaminants due to the complexity and cost of these types of analyses. Here we describe an approach that uses quantitative structure-activity relationship modeling as an alternative methodology to fill data gaps in the developmental toxicity profile of certain halogenated compounds. Chemical information was obtained and curated using the OECD Quantitative Structure-Activity Relationship Toolbox, version 3.0. Data from 35 curated compounds were analyzed via linear regression to build the predictive model, which has an R² of 0.79 and a Q² of 0.77. The applicability domain (AD) was defined by chemical category and structural similarity. Seven halogenated chemicals that fit the AD but are not part of the training set were employed for external validation purposes. Our model predicted lowest observed adverse effect level values with at most a threefold deviation from the observed experimental values for all chemicals that fit the AD. The good predictability of our model suggests that this method may be applicable to the analysis of qualifying compounds whenever developmental toxicity information is lacking or incomplete for risk assessment considerations.
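
    The R² and Q² statistics quoted for the model are standard regression diagnostics: R² from the full least-squares fit, and Q² from leave-one-out cross-validation (1 - PRESS/TSS). A minimal sketch of their computation, on synthetic descriptors rather than the paper's curated data set:

    import numpy as np

    def r2_q2(X, y):
        """R^2 from the full least-squares fit and leave-one-out Q^2
        (1 - PRESS/TSS), the two statistics quoted for the model."""
        Xb = np.column_stack([np.ones(len(y)), X])   # add intercept column
        beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - np.sum((y - Xb @ beta) ** 2) / ss_tot
        press = 0.0
        for i in range(len(y)):                      # leave-one-out residuals
            mask = np.arange(len(y)) != i
            b_i, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
            press += (y[i] - Xb[i] @ b_i) ** 2
        return r2, 1.0 - press / ss_tot

    rng = np.random.default_rng(0)
    X = rng.normal(size=(35, 3))                     # 35 compounds, 3 descriptors
    y = X @ [0.8, -0.5, 0.3] + rng.normal(scale=0.3, size=35)
    print(r2_q2(X, y))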

  14. Ammonia quantitative analysis model based on miniaturized Al ionization gas sensor and non-linear bistable dynamic model

    PubMed Central

    Ma, Rongfei

    2015-01-01

    In this paper, a quantitative analysis model for ammonia based on a miniaturized Al ionization gas sensor and a non-linear bistable dynamic model is proposed. An Al-plate anodic gas-ionization sensor was used to obtain the current-voltage (I-V) data. The measurement data were processed with the non-linear bistable dynamic model. Results showed that the proposed method quantitatively determines ammonia concentrations. PMID:25975362

  15. Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.

    NASA Astrophysics Data System (ADS)

    Knudsen, Thomas; Aasbjerg Nielsen, Allan

    2013-04-01

    through the processor, individually contributing to the nearest grid posts in a memory mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that does not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process and giving us new insight into the precision of the current model, which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for the integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi-campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering-level data for the current data sets. This is essential if future generations of DEM users are to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
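
    The anomaly-screening idea described here reduces to a simple filter: compare each new point against the current model and keep only the significant differences for the correction grid. A minimal sketch, with a hypothetical 0.10 m significance threshold and synthetic data:

    import numpy as np

    def significant_points(points_z, dem_z_at_points, threshold=0.10):
        """Keep only LiDAR points whose anomaly against the current DEM is
        significant; the correction grid is then built from these alone.
        The 0.10 m threshold is illustrative."""
        anomaly = points_z - dem_z_at_points
        keep = np.abs(anomaly) > threshold
        return anomaly[keep], keep

    # Synthetic example: most points agree with the model within noise
    rng = np.random.default_rng(2)
    dem_sample = np.zeros(100000)
    new_points = rng.normal(scale=0.03, size=100000)
    new_points[:500] += 1.5                      # a small area of real change
    kept, mask = significant_points(new_points, dem_sample)
    print(f"retained {mask.sum()} of {mask.size} points for the update grid")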

  16. Future observational and modelling needs identified on the basis of the existing shelf data

    NASA Astrophysics Data System (ADS)

    Berlamont, J.; Radach, G.; Becker, G.; Colijn, F.; Gekeler, J.; Laane, R. W. P. M.; Monbaliu, J.; Prandle, D.; Sündermann, J.; van Raaphorst, W.; Yu, C. S.

    1996-09-01

    NOWESP has compiled a vast quantity of existing data from the north-west European shelf. Such a focused task is without precedent. It is now highly recommended that one, or a few, national and international data centres or agencies should be chosen and properly supported by the EU, where all available observational data, including the NOWESP data, are collected, stored, regularly updated by the providers of the data, and made available to researchers. International agreement must be reached on the quality control procedures and quality standards for data to be stored in these data bases. Proper arrangements should be made to preserve the economic value of the data for their “owners” without compromising use of the data by researchers or duplicating data collecting efforts. The continental shelf data needed are concentration fields of temperature, salinity, nutrients, suspended matter and chlorophyll, which can be called “climatological” fields. For this purpose at least one monthly survey of the whole European shelf is needed for at least five years, with a proper spatial resolution, e.g. 1° by 1°, and at least in those areas where climatological data are now totally lacking. From the modelling point of view an alternative would be the availability of data from sufficiently representative fixed stations on the shelf, with weekly sampling for several years. It should be realized that there are hardly any data available on the shelf boundaries. Therefore, one should consider a European effort to set up a limited network of stations, especially at the shelf edge, where a limited, selected set of parameters is measured on a long-term basis (time series) for use in modelling and for interpreting long-term natural changes in the marine environment and changes due to human interference (eutrophication, pollutants, climatic changes, biodiversity changes). The EU could foster coordination of nationally organized measuring campaigns in Europe.

  17. Physically based estimation of soil water retention from textural data: General framework, new models, and streamlined existing models

    USGS Publications Warehouse

    Nimmo, J.R.; Herkelrath, W.N.; Laguna, Luna A.M.

    2007-01-01

    Numerous models are in widespread use for the estimation of soil water retention from more easily measured textural data. Improved models are needed for better prediction and wider applicability. We developed a basic framework from which new and existing models can be derived to facilitate improvements. Starting from the assumption that every particle has a characteristic dimension R associated uniquely with a matric pressure ψ, and that the form of the ψ-R relation is the defining characteristic of each model, this framework leads to particular models by specification of geometric relationships between pores and particles. Typical assumptions are that particles are spheres, that pores are cylinders with volume equal to the associated particle volume times the void ratio, and that the capillary inverse proportionality between radius and matric pressure is valid. Examples include fixed-pore-shape and fixed-pore-length models. We also developed alternative versions of the model of Arya and Paris that eliminate its interval-size dependence and other problems. The alternative models are calculable by direct application of algebraic formulas rather than manipulation of data tables and intermediate results, and they easily combine with other models (e.g., incorporating structural effects) that are formulated on a continuous basis. Additionally, we developed a family of models based on the same pore geometry as the widely used unsaturated hydraulic conductivity model of Mualem. Predictions of measurements for different suitable media show that some of the models provide consistently good results and can be chosen based on ease of calculation and other factors. © Soil Science Society of America. All rights reserved.
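
    The "capillary inverse proportionality between radius and matric pressure" invoked by these models is the Young-Laplace relation ψ = 2σ/r (for a zero contact angle). A minimal sketch of the conversion from pore radius to matric head follows; the mapping from particle dimension R to pore radius is model-specific and omitted here.

    import numpy as np

    SIGMA = 0.072   # surface tension of water at ~25 C [N/m]
    RHO_G = 9810.0  # unit weight of water [N/m^3], for head conversion

    def matric_head(pore_radius_m):
        """Capillary inverse proportionality: psi = 2*sigma / r (Pa),
        converted to an equivalent pressure head in metres of water."""
        pressure = 2.0 * SIGMA / pore_radius_m
        return pressure / RHO_G

    # Illustrative pore radii from clay-sized to sand-sized (m)
    for r in [1e-7, 1e-6, 1e-5, 1e-4]:
        print(f"r = {r:.0e} m  ->  head = {matric_head(r):8.2f} m")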

  18. Quantitative study of osteoporosis model based on synchrotron radiation.

    PubMed

    Xu, Wangyang; Xu, Jun; Zhao, Jun; Sun, Jianqi

    2015-01-01

    To investigate the changes at different stages of primary osteoporosis, we performed quantitative analysis of osteoporosis using synchrotron radiation computed tomography (SRCT), together with histomorphometry analysis and finite element analysis (FEA). Tibias, femurs and lumbar vertebrae were dissected from sham-ovariectomized rats and from ovariectomized rats suffering from osteoporosis at certain time points. The samples were scanned by SRCT and FEA was then applied to the reconstructed slices. Histomorphometry analysis showed that the structure of some trabeculae degraded in osteoporosis as the bone volume decreased; for femurs, the bone volume fraction (BV/TV) decreased from 69% to 43%. That led to an increase in trabecular separation (from 45.05 μm to 97.09 μm) and a reduction in trabecular number (from 7.99 mm⁻¹ to 5.97 mm⁻¹). Simulation of various mechanical tests indicated that, with the exacerbation of osteoporosis, the bones' resistance to compression, bending and torsion gradually weakened. The compression stiffness decreased from 1770.96 F μm⁻¹ to 697.41 F μm⁻¹, which matched the histomorphometry analysis. This study suggests that the combination of the two analyses can quantitatively assess bone strength with good accuracy. PMID:26737752

  19. [A multivariate nonlinear model for quantitative analysis in laser-induced breakdown spectroscopy].

    PubMed

    Chen, Xing-Long; Fu, Hong-Bo; Wang, Jing-Ge; Ni, Zhi-Bo; He, Wen-Gan; Xu, Jun; Rao, Rui-Zhong; Dong, Rui-Zhong

    2014-11-01

    Most quantitative models used in laser-induced breakdown spectroscopy (LIBS) are based on the hypothesis that the laser-induced plasma approaches the state of local thermal equilibrium (LTE). However, local equilibrium holds, at best, only during a specific time segment of the plasma evolution. Because the energy-level populations do not follow a Boltzmann distribution under non-LTE conditions, quantitative models based on a single spectral line are then inaccurate. A multivariate nonlinear model, in which LTE is not required, is proposed in this article to reduce the signal fluctuation and improve the accuracy of quantitative analysis. This multivariate nonlinear model was compared with the internal calibration model, which is based on the LTE condition. The content of Mn in steel samples was determined using the two models, respectively. A smaller error and a smaller relative standard deviation (RSD) were observed with the multivariate nonlinear model. This result demonstrates that the multivariate nonlinear model can improve measurement accuracy and repeatability.
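
    Under the LTE hypothesis criticized here, single-line calibration rests on level populations following a Boltzmann distribution. A minimal sketch of those populations for a hypothetical three-level system (the energies and degeneracies are illustrative, not from any specific element); when the true populations depart from these fractions, a single-line model miscalibrates.

    import numpy as np

    K_B_EV = 8.617e-5  # Boltzmann constant [eV/K]

    def boltzmann_fractions(E_levels_eV, g_levels, T_K):
        """Level population fractions under the LTE (Boltzmann) assumption;
        a single-line calibration is only valid when these hold."""
        w = g_levels * np.exp(-E_levels_eV / (K_B_EV * T_K))
        return w / w.sum()

    # Hypothetical three-level system
    E = np.array([0.0, 2.1, 3.8])   # level energies [eV]
    g = np.array([1.0, 3.0, 5.0])   # level degeneracies
    for T in (8000.0, 12000.0):     # two plasma temperatures [K]
        print(T, boltzmann_fractions(E, g, T))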

  20. Quantitative and predictive model of kinetic regulation by E. coli TPP riboswitches

    PubMed Central

    Guedich, Sondés; Puffer-Enders, Barbara; Baltzinger, Mireille; Hoffmann, Guillaume; Da Veiga, Cyrielle; Jossinet, Fabrice; Thore, Stéphane; Bec, Guillaume; Ennifar, Eric; Burnouf, Dominique; Dumas, Philippe

    2016-01-01

    Riboswitches are non-coding elements upstream or downstream of mRNAs that, upon binding of a specific ligand, regulate transcription and/or translation initiation in bacteria, or alternative splicing in plants and fungi. We have studied thiamine pyrophosphate (TPP) riboswitches regulating translation of the thiM operon and transcription and translation of the thiC operon in E. coli, and that of THIC in the plant A. thaliana. For all of them, we ascertained an induced-fit mechanism involving initial binding of the TPP followed by a conformational change leading to a higher-affinity complex. The experimental values obtained for all kinetic and thermodynamic parameters of TPP binding imply that the regulation by the A. thaliana riboswitch is governed by the mass-action law, whereas it is of a kinetic nature for the two bacterial riboswitches. Kinetic regulation requires that the RNA polymerase pauses after synthesis of each riboswitch aptamer to leave time for TPP binding, but only when its concentration is sufficient. A quantitative model of regulation highlighted how the pausing time has to be linked to the kinetic rates of initial TPP binding to obtain an ON/OFF switch in the correct concentration range of TPP. We verified the existence of these pauses and the model prediction on their duration. Our analysis also led to quantitative estimates of the respective efficiencies of kinetic and thermodynamic regulation, which show that kinetically regulated riboswitches react more sharply to concentration variations of their ligand than thermodynamically regulated riboswitches. This rationalizes the interest of kinetic regulation and confirms empirical observations that were obtained by numerical simulations. PMID:26932506

  1. Quantitative assessment of meteorological and tropospheric Zenith Hydrostatic Delay models

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Guo, Jiming; Chen, Ming; Shi, Junbo; Zhou, Lv

    2016-09-01

    Tropospheric delay has always been an important issue in GNSS/DORIS/VLBI/InSAR processing. The most commonly used empirical models for the determination of the tropospheric Zenith Hydrostatic Delay (ZHD), comprising three meteorological models and two empirical ZHD models, are carefully analyzed in this paper. The meteorological models are UNB3m, GPT2 and GPT2w, while the ZHD models are Hopfield and Saastamoinen. By reference to in-situ meteorological measurements and ray-traced ZHD values at 91 globally distributed radiosonde sites over a four-year period from 2010 to 2013, it is found that the errors of model-derived values correlate strongly with latitude. Specifically, the Saastamoinen model shows a systematic error of about -3 mm. Therefore a modified Saastamoinen model is developed based on the "best average" refractivity constant, and is validated with radiosonde data. Among the different models, the GPT2w and the modified Saastamoinen model perform best. ZHD values derived from their combination have a mean bias of -0.1 mm and a mean RMS of 13.9 mm. Limitations of the present models are discussed and suggestions for further improvements are given.
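
    For reference, the classical Saastamoinen ZHD, whose refractivity constant (0.0022768 m/hPa) the paper proposes to refine, can be sketched as follows; the constants below are the standard published ones, not the modified values derived in the paper.

    import numpy as np

    def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
        """Classical Saastamoinen Zenith Hydrostatic Delay (metres):
        ZHD = 0.0022768 * P / (1 - 0.00266*cos(2*lat) - 0.28e-6 * h)."""
        f = 1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.28e-6 * height_m
        return 0.0022768 * pressure_hpa / f

    # Example: sea-level site at 45 degrees latitude, standard pressure
    print(f"ZHD = {saastamoinen_zhd(1013.25, np.radians(45.0), 0.0):.4f} m")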

  2. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and human health effect...

  3. A quantitative risk-based model for reasoning over critical system properties

    NASA Technical Reports Server (NTRS)

    Feather, M. S.

    2002-01-01

    This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties such as security, safety, survivability, fault tolerance, and real-time performance.

  4. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • SDMProjectBuilder (which includes the Microbial Source Module as part...

  5. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  6. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    EPA Science Inventory

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  7. Thermodynamic Modeling of a Solid Oxide Fuel Cell to Couple with an Existing Gas Turbine Engine Model

    NASA Technical Reports Server (NTRS)

    Brinson, Thomas E.; Kopasakis, George

    2004-01-01

    The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in combining a solid oxide fuel cell (SOFC) with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate a SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For the project's application, the main issue with this model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create a SOFC model based on the DOE model that takes the fuel cell's flow rates as inputs and outputs the temperatures of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two procedures. First, for a given flow rate the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and a temperature curve will be generated, and the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve because voltage is a function of temperature. Once the temperature profile is created and validated, the model will be placed into the turbine engine simulation for system analysis.
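
    The First Law balance described, applied to the gas stream, gives the exit temperature directly once the stack's waste heat is known. A minimal sketch, where the thermoneutral voltage is an assumption (roughly the LHV value for hydrogen) and all operating numbers are illustrative, not from the DOE model:

    def sofc_exit_temperature(t_in_K, m_dot, cp, current_A, v_cell, e_tn=1.25):
        """First Law energy balance for the flow stream through the stack:
        waste heat I*(E_tn - V) raises the gas temperature, so
        T_out = T_in + Q / (m_dot * cp). E_tn is the thermoneutral voltage
        (the 1.25 V default assumes the LHV of hydrogen)."""
        q_waste = current_A * (e_tn - v_cell)        # W rejected to the gas
        return t_in_K + q_waste / (m_dot * cp)

    # Example: 0.01 kg/s of air-like gas (cp ~ 1100 J/kg/K), 300 A at 0.7 V
    print(f"T_out = {sofc_exit_temperature(1023.0, 0.01, 1100.0, 300.0, 0.7):.1f} K")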

  8. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level. PMID:15376934
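
    The summary curve described, the fraction of small samples failing at each significance level, can be sketched as follows. Shapiro-Wilk is used here only as a stand-in normality test (the paper discusses Kolmogorov-Smirnov and Pearson-type tests), and the samples are synthetic draws from the hypothesized Gaussian model.

    import numpy as np
    from scipy import stats

    def fraction_failing(samples, levels):
        """For many small samples, the fraction rejected by a normality
        test at each significance level."""
        pvals = []
        for s in samples:
            w, p = stats.shapiro(s)      # stand-in goodness-of-fit test
            pvals.append(p)
        pvals = np.array(pvals)
        return [(lvl, float(np.mean(pvals < lvl))) for lvl in levels]

    # 1000 small samples (n = 8), drawn from the hypothesized Gaussian model
    rng = np.random.default_rng(3)
    samples = rng.normal(size=(1000, 8))
    for lvl, frac in fraction_failing(samples, [0.01, 0.05, 0.10]):
        print(f"level {lvl:.2f}: {100 * frac:.1f}% of samples fail")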

  9. Quantitative Evaluation of Liver Fibrosis Using Multi-Rayleigh Model with Hypoechoic Component

    NASA Astrophysics Data System (ADS)

    Higuchi, Tatsuya; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    2013-07-01

    To realize a quantitative diagnostic method for liver fibrosis, we have been developing a modeling method for the probability density function of the echo amplitude. In our previous model, the approximation accuracy was insufficient in regions containing hypoechoic tissue such as a nodule or a blood vessel. In this study, we examined a multi-Rayleigh model with three Rayleigh distributions, corresponding to the distributions of the echo amplitude from hypoechoic, normal, and fibrous tissue. We showed quantitatively that the proposed model can adequately fit the amplitude distribution of liver fibrosis echo data with hypoechoic tissue, using the Kullback-Leibler (KL) divergence, an index of the difference between two probability distributions. We also found that fibrosis indices can be estimated stably using the proposed model even if hypoechoic tissue is included in the region of interest. We conclude that the multi-Rayleigh model with three components can be used to evaluate the progress of liver fibrosis quantitatively.
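
    A minimal sketch of the three-component mixture and the KL-divergence fit index; the weights and scale parameters below are illustrative, not estimated from echo data.

    import numpy as np

    def rayleigh_pdf(x, sigma):
        """Rayleigh probability density with scale parameter sigma."""
        return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

    def multi_rayleigh_pdf(x, weights, sigmas):
        """Three-component mixture for hypoechoic, normal and fibrous tissue."""
        return sum(w * rayleigh_pdf(x, s) for w, s in zip(weights, sigmas))

    def kl_divergence(p, q, dx):
        """Discrete approximation of KL(p || q), the fit index in the paper."""
        mask = (p > 0) & (q > 0)
        return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

    # Illustrative parameters (weights sum to 1; scales in arbitrary units)
    x = np.linspace(1e-6, 10.0, 2000)
    dx = x[1] - x[0]
    mixture = multi_rayleigh_pdf(x, [0.2, 0.5, 0.3], [0.5, 1.0, 2.0])
    single = rayleigh_pdf(x, 1.2)
    print(f"KL(mixture || single Rayleigh) = {kl_divergence(mixture, single, dx):.4f}")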

  10. On the Non-Existence of Optimal Solutions and the Occurrence of "Degeneracy" in the CANDECOMP/PARAFAC Model

    ERIC Educational Resources Information Center

    Krijnen, Wim P.; Dijkstra, Theo K.; Stegeman, Alwin

    2008-01-01

    The CANDECOMP/PARAFAC (CP) model decomposes a three-way array into a prespecified number of "R" factors and a residual array by minimizing the sum of squares of the latter. It is well known that an optimal solution for CP need not exist. We show that if an optimal CP solution does not exist, then any sequence of CP factors monotonically decreasing…

  11. Global existence of the three-dimensional viscous quantum magnetohydrodynamic model

    SciTech Connect

    Yang, Jianwei; Ju, Qiangchang

    2014-08-15

    The global-in-time existence of weak solutions to the viscous quantum magnetohydrodynamic equations in a three-dimensional torus with large data is proved. The proof uses the Faedo-Galerkin method and weak compactness techniques.

  12. A Quantitative Causal Model Theory of Conditional Reasoning

    ERIC Educational Resources Information Center

    Fernbach, Philip M.; Erb, Christopher D.

    2013-01-01

    The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…

  13. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.

  14. Towards the quantitative evaluation of visual attention models.

    PubMed

    Bylinskii, Z; DeGennaro, E M; Rajalingham, R; Ruda, H; Zhang, J; Tsotsos, J K

    2015-11-01

    Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluations have made comparison of these models very difficult. Taxonomies have been constructed in an attempt at the organization and classification of models, but are not sufficient at quantifying which classes of models are most capable of explaining available data. At the same time, a multitude of physiological and behavioral findings have been published, measuring various aspects of human and non-human primate visual attention. All of these elements highlight the need to integrate the computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets to measure success on specific tasks, under these definitions. In this paper, we provide some examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations. PMID:25951756

  15. Quantitative Methods for Comparing Different Polyline Stream Network Models

    SciTech Connect

    Danny L. Anderson; Daniel P. Ames; Ping Yang

    2014-04-01

    Two techniques for exploring the relative horizontal accuracy of complex linear spatial features are described, and sample source code (pseudocode) is presented for this purpose. The first technique, relative sinuosity, is presented as a measure of the complexity or detail of a polyline network in comparison to a reference network. We term the second technique longitudinal root mean squared error (LRMSE) and present it as a means for quantitatively assessing the horizontal variance between two polyline data sets representing digitized (reference) and derived stream and river networks. Both relative sinuosity and LRMSE are shown to be suitable measures of horizontal stream network accuracy for assessing quality and variation in linear features. Both techniques have been used in two recent investigations involving the extraction of hydrographic features from LiDAR elevation data. One confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes yielded better stream network delineations, based on sinuosity and LRMSE, when using LiDAR-derived DEMs. The other demonstrated a new method of delineating stream channels directly from LiDAR point clouds, without the intermediate step of deriving a DEM, showing that direct delineation from LiDAR point clouds yielded a much better match, as indicated by the LRMSE.
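
    Relative sinuosity reduces to a ratio of path length to chord length, evaluated for the derived network against the reference. A minimal sketch for single polylines (a full network version would aggregate over reaches); the coordinates are synthetic.

    import numpy as np

    def sinuosity(poly):
        """Path length over straight-line endpoint distance for one polyline."""
        seg = np.diff(poly, axis=0)
        path = np.sum(np.hypot(seg[:, 0], seg[:, 1]))
        chord = np.hypot(*(poly[-1] - poly[0]))
        return path / chord

    # Relative sinuosity: derived network complexity vs. the reference network
    reference = np.array([[0, 0], [1, 0.2], [2, -0.1], [3, 0.3], [4, 0]], float)
    derived = np.array([[0, 0], [2, 0.1], [4, 0]], float)
    print(f"relative sinuosity = {sinuosity(derived) / sinuosity(reference):.3f}")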

  16. Detection of cardiomyopathy in an animal model using quantitative autoradiography

    SciTech Connect

    Kubota, K.; Som, P.; Oster, Z.H.; Brill, A.B.; Goodman, M.M.; Knapp, F.F. Jr.; Atkins, H.L.; Sole, M.J.

    1988-10-01

    A fatty acid analog, 15-(p-iodophenyl)-3,3-dimethylpentadecanoic acid (DMIPP), was studied in cardiomyopathic (CM) and normal age-matched Syrian hamsters. Dual-tracer quantitative whole-body autoradiography (QARG) with DMIPP and 2-(¹⁴C(U))-2-deoxy-2-fluoro-D-glucose (FDG), or with FDG and ²⁰¹Tl, enabled comparison of the uptake of a fatty acid and a glucose analog with the blood flow. These comparisons were carried out at the onset and mid-stage of the disease, before congestive failure developed. Groups of CM and normal animals were treated with verapamil from the age of 26 days, before the onset of the disease, for 41 days. In CM hearts, areas of decreased DMIPP uptake were seen. These areas were much larger than the areas of decreased uptake of FDG or ²⁰¹Tl. In early CM only minimal changes in FDG or ²⁰¹Tl uptake were observed as compared to controls. Treatment of CM-prone animals with verapamil prevented any changes in DMIPP, FDG, or ²⁰¹Tl uptake. DMIPP seems to be a more sensitive indicator of early cardiomyopathic changes than ²⁰¹Tl or FDG. A trial of DMIPP and SPECT in the diagnosis of human disease, as well as for monitoring the effects of drugs which may prevent it, seems to be warranted.

  17. Digital clocks: simple Boolean models can quantitatively describe circadian systems

    PubMed Central

    Akman, Ozgur E.; Watterson, Steven; Parton, Andrew; Binns, Nigel; Millar, Andrew J.; Ghazal, Peter

    2012-01-01

    The gene networks that comprise the circadian clock modulate biological function across a range of scales, from gene expression to performance and adaptive behaviour. The clock functions by generating endogenous rhythms that can be entrained to the external 24-h day–night cycle, enabling organisms to optimally time biochemical processes relative to dawn and dusk. In recent years, computational models based on differential equations have become useful tools for dissecting and quantifying the complex regulatory relationships underlying the clock's oscillatory dynamics. However, optimizing the large parameter sets characteristic of these models places intense demands on both computational and experimental resources, limiting the scope of in silico studies. Here, we develop an approach based on Boolean logic that dramatically reduces the parametrization, making the state and parameter spaces finite and tractable. We introduce efficient methods for fitting Boolean models to molecular data, successfully demonstrating their application to synthetic time courses generated by a number of established clock models, as well as experimental expression levels measured using luciferase imaging. Our results indicate that despite their relative simplicity, logic models can (i) simulate circadian oscillations with the correct, experimentally observed phase relationships among genes and (ii) flexibly entrain to light stimuli, reproducing the complex responses to variations in daylength generated by more detailed differential equation formulations. Our work also demonstrates that logic models have sufficient predictive power to identify optimal regulatory structures from experimental data. By presenting the first Boolean models of circadian circuits together with general techniques for their optimization, we hope to establish a new framework for the systematic modelling of more complex clocks, as well as other circuits with different qualitative dynamics. In particular, we
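
    The flavor of the approach can be sketched with a toy synchronous Boolean update entrained to a light-dark cycle; the two-gene topology below is purely illustrative and is not one of the published clock circuits fitted in the paper.

    def step(state, light):
        """One synchronous update of a toy two-gene Boolean clock:
        gene A is activated by light and repressed by B; A activates B.
        The topology is illustrative only."""
        a, b = state
        return (light and not b, a)

    # Entrain to a 12:12 light-dark cycle and print the state of each gene
    state = (False, False)
    for hour in range(48):
        light = (hour % 24) < 12
        state = step(state, light)
        print(hour, int(state[0]), int(state[1]))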

  18. Efficient Recycled Algorithms for Quantitative Trait Models on Phylogenies

    PubMed Central

    Hiscott, Gordon; Fox, Colin; Parry, Matthew; Bryant, David

    2016-01-01

    We present an efficient and flexible method for computing likelihoods for phenotypic traits on a phylogeny. The method does not resort to Monte Carlo computation but instead blends Felsenstein’s discrete character pruning algorithm with methods for numerical quadrature. It is not limited to Gaussian models and adapts readily to model uncertainty in the observed trait values. We demonstrate the framework by developing efficient algorithms for likelihood calculation and ancestral state reconstruction under Wright’s threshold model, applying our methods to a data set of trait data for extrafloral nectaries across a phylogeny of 839 Fabales species. PMID:27056412

  19. A quantitative model of plasma in Neptune's magnetosphere

    NASA Astrophysics Data System (ADS)

    Richardson, J. D.

    1993-07-01

    A model encompassing plasma transport and energy processes is applied to Neptune's magnetosphere. Starting with profiles of the neutral densities and the electron temperature, the model calculates the plasma density and ion temperature profiles. Good agreement between model results and observations is obtained for a neutral source of 5 × 10²⁵ s⁻¹ if the diffusion coefficient is 10⁻⁸ L³ R_N²/s, plasma is lost at a rate 1/3 that of the strong diffusion rate, and plasma subcorotates in the region outside Triton.

  20. Model-based resolution: applying the theory in quantitative microscopy.

    PubMed

    Santos, A; Young, I T

    2000-06-10

    Model-based image processing techniques have been proposed as a way to increase the resolution of optical microscopes. Here a model based on the microscope's point-spread function is analyzed, and the resolution limits achieved with a proposed goodness-of-fit criterion are quantified. Several experiments were performed to evaluate the possibilities and limitations of this method: (a) experiments with an ideal (diffraction-limited) microscope, (b) experiments with simulated dots and a real microscope, and (c) experiments with real dots acquired with a real microscope. The results show that a threefold increase over classical resolution (e.g., Rayleigh) is possible. These results can be affected by model misspecification, whereas model corruption, as seen in the effect of Poisson noise, seems to be unimportant. This research can be considered preliminary, with the final goal being the accurate measurement of various cytogenetic properties, such as gene distributions, in labeled preparations.

  1. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-01-01

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem. PMID:21533156
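
    A minimal sketch in the spirit of the compartment model described: eclosion saturates with colony size, and hive bees are recruited to foraging faster when foragers are scarce (social inhibition). The functional forms and parameter values below are illustrative assumptions rather than the paper's exact equations.

    def colony(H0=16000.0, F0=8000.0, m=0.24, days=200, dt=0.05,
               L=2000.0, w=27000.0, alpha=0.25, sigma=0.75):
        """Euler integration of a hive-bee (H) / forager (F) compartment
        model: eclosion L*N/(w+N) saturates with colony size N, recruitment
        to foraging is slowed by social inhibition from existing foragers,
        and m is the forager death rate."""
        H, F = H0, F0
        for _ in range(int(days / dt)):
            N = H + F
            eclosion = L * N / (w + N)              # brood reared to adulthood
            recruit = H * max(alpha - sigma * F / N, 0.0)
            H += dt * (eclosion - recruit)
            F += dt * (recruit - m * F)
        return H, F

    for m in (0.10, 0.24, 0.40):    # forager death rates: low, medium, high
        print(m, colony(m=m))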

  2. A Quantitative Model of Honey Bee Colony Population Dynamics

    PubMed Central

    Khoury, David S.; Myerscough, Mary R.; Barron, Andrew B.

    2011-01-01

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate beneath which colonies regulate a stable population size. If death rates are sustained higher than this threshold rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem. PMID:21533156

  3. Quantitative phase-field model of alloy solidification.

    PubMed

    Echebarria, Blas; Folch, Roger; Karma, Alain; Plapp, Mathis

    2004-12-01

    We present a detailed derivation and thin-interface analysis of a phase-field model that can accurately simulate microstructural pattern formation for low-speed directional solidification of a dilute binary alloy. This advance with respect to previous phase-field models is achieved by the addition of a phenomenological "antitrapping" solute current in the mass conservation relation [Phys. Rev. Lett. 87, 115701 (2001)].

  4. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology

    PubMed Central

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian

    2012-01-01

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10⁶, of normal HSCs. Radiobiologic estimates favor values > 10⁶ for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to the observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  5. Quantitative modeling of chronic myeloid leukemia: insights from radiobiology.

    PubMed

    Radivoyevitch, Tomas; Hlatky, Lynn; Landaw, Julian; Sachs, Rainer K

    2012-05-10

    Mathematical models of chronic myeloid leukemia (CML) cell population dynamics are being developed to improve CML understanding and treatment. We review such models in light of relevant findings from radiobiology, emphasizing 3 points. First, the CML models almost all assert that the latency time, from CML initiation to diagnosis, is at most ∼10 years. Meanwhile, current radiobiologic estimates, based on Japanese atomic bomb survivor data, indicate a substantially higher maximum, suggesting longer-term relapses and extra resistance mutations. Second, different CML models assume different numbers, between 400 and 10⁶, of normal HSCs. Radiobiologic estimates favor values > 10⁶ for the number of normal cells (often assumed to be the HSCs) that are at risk for a CML-initiating BCR-ABL translocation. Moreover, there is some evidence for an HSC dead-band hypothesis, consistent with HSC numbers being very different across different healthy adults. Third, radiobiologists have found that sporadic (background, age-driven) chromosome translocation incidence increases with age during adulthood. BCR-ABL translocation incidence increasing with age would provide a hitherto underanalyzed contribution to the observed background adult-onset CML incidence acceleration with age, and would cast some doubt on stage-number inferences from multistage carcinogenesis models in general. PMID:22353999

  6. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    PubMed

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
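
    The Frank-Tamm relation underlying such production-efficiency models gives the photon yield per unit path length between wavelengths λ1 and λ2 as dN/dx = 2πα z^2 (1/λ1 - 1/λ2)(1 - 1/(β^2 n^2)), which is nonzero only above the threshold β > 1/n. A back-of-the-envelope sketch for electrons in water; the energies and wavelength band are illustrative, not values from the paper:

    ```python
    import math

    ALPHA = 1.0 / 137.036   # fine-structure constant
    M_E_KEV = 511.0         # electron rest energy (keV)

    def cerenkov_photons_per_m(beta, n=1.33, lam1_nm=400.0, lam2_nm=700.0, z=1):
        """Frank-Tamm photon yield per metre in the band [lam1, lam2]."""
        if beta * n <= 1.0:
            return 0.0  # below the Cerenkov threshold: no light
        band = 1.0 / (lam1_nm * 1e-9) - 1.0 / (lam2_nm * 1e-9)  # in 1/m
        return 2.0 * math.pi * ALPHA * z ** 2 * band * (1.0 - 1.0 / (beta * n) ** 2)

    def beta_from_kinetic_energy(t_kev):
        gamma = 1.0 + t_kev / M_E_KEV
        return math.sqrt(1.0 - 1.0 / gamma ** 2)

    # Threshold kinetic energy for electrons in water (beta = 1/n): ~264 keV
    n = 1.33
    gamma_thr = 1.0 / math.sqrt(1.0 - 1.0 / n ** 2)
    print(f"threshold: {(gamma_thr - 1.0) * M_E_KEV:.0f} keV")

    for t_kev in (200.0, 500.0, 1000.0):
        beta = beta_from_kinetic_energy(t_kev)
        print(f"T = {t_kev:6.0f} keV: {cerenkov_photons_per_m(beta):8.1f} photons/m (400-700 nm)")
    ```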

  7. Quantitative experimental modelling of fragmentation during explosive volcanism

    NASA Astrophysics Data System (ADS)

    Thordén Haug, Ø.; Galland, O.; Gisler, G.

    2012-04-01

    Phreatomagmatic eruptions result from the violent interaction between magma and an external source of water, such as ground water or a lake. This interaction causes fragmentation of the magma and/or the host rock, resulting in coarse-grained (lapilli) to very fine-grained (ash) material. The products of phreatomagmatic explosions are classically described by their fragment size distribution, which commonly follows power laws of exponent D. Such a descriptive approach, however, considers the final products only and does not provide information on the dynamics of fragmentation. The aim of this contribution is thus to address the following fundamental questions. What physics governs fragmentation processes? How does fragmentation occur through time? What mechanisms produce power law fragment size distributions? And what scaling laws control the exponent D? To address these questions, we performed a quantitative experimental study. The setup consists of a Hele-Shaw cell filled with a layer of cohesive silica flour, at the base of which a pulse of pressurized air is injected, leading to fragmentation of the layer of flour. The fragmentation process is monitored through time using a high-speed camera. By systematically varying the air pressure (P) and the thickness of the flour layer (h), we observed two morphologies of fragmentation: "lift off", where the silica flour above the injection inlet is ejected upwards, and "channeling", where the air pierces through the layer along a sub-vertical conduit. By building a phase diagram, we show that the morphology is controlled by P/(dgh), where d is the density of the flour and g is the gravitational acceleration. To quantify the fragmentation process, we developed a Matlab image analysis program, which calculates the number and sizes of the fragments, and thus the fragment size distribution, during the experiments. The fragment size distributions are in general described by power law distributions of
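
    The exponent D of a fragment size distribution can be estimated from the measured sizes without binning, using the continuous maximum-likelihood (Hill) estimator. A sketch on synthetic data; the sample size and the true exponent are invented, not taken from the experiments:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic fragment sizes drawn from a Pareto law with density ~ x**-(1+D),
    # x >= x_min (inverse-CDF sampling)
    D_true, x_min, n = 1.8, 1.0, 5000
    sizes = x_min * (1.0 - rng.random(n)) ** (-1.0 / D_true)

    # Continuous maximum-likelihood (Hill) estimator for the exponent D
    tail = sizes[sizes >= x_min]
    D_hat = len(tail) / np.sum(np.log(tail / x_min))
    err = D_hat / np.sqrt(len(tail))  # approximate standard error

    print(f"true D = {D_true}, estimated D = {D_hat:.3f} +/- {err:.3f}")
    ```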

  8. A quantitative risk model for early lifecycle decision making

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.

  9. Comprehensive Quantitative Model of Inner-Magnetosphere Dynamics

    NASA Technical Reports Server (NTRS)

    Wolf, Richard A.

    2002-01-01

    This report includes descriptions of papers, a thesis, and works still in progress which cover observations of space weather in the Earth's magnetosphere. The topics discussed include: 1) modelling of magnetosphere activity; 2) magnetic storms; 3) high energy electrons; and 4) plasmas.

  10. Quantitative Research: A Dispute Resolution Model for FTC Advertising Regulation.

    ERIC Educational Resources Information Center

    Richards, Jef I.; Preston, Ivan L.

    Noting the lack of a dispute mechanism for determining whether an advertising practice is truly deceptive without generating the costs and negative publicity produced by traditional Federal Trade Commission (FTC) procedures, this paper proposes a model based upon early termination of the issues through jointly commissioned behavioral research. The…

  11. Inference of Quantitative Models of Bacterial Promoters from Time-Series Reporter Gene Data

    PubMed Central

    Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde

    2015-01-01

    The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for
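
    The correction described rests on a standard two-stage kinetic model of gene expression, in which promoter activity drives mRNA and mRNA drives protein with its own, often much slower, turnover. A minimal sketch with hypothetical rate constants (not the paper's estimates); note how the protein responds far more sluggishly than the mRNA, which is exactly why mRNA abundance is a poor proxy for protein concentration:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-stage kinetic model of gene expression:
    #   dM/dt = a(t) - gm * M     (mRNA, driven by promoter activity a(t))
    #   dP/dt = kp * M - gp * P   (protein)
    gm, kp, gp = 0.2, 0.5, 0.02   # 1/min; hypothetical rate constants

    def promoter_activity(t):
        return 1.0 if 50.0 <= t <= 150.0 else 0.1  # step-like induction

    def rhs(t, y):
        m, p = y
        return [promoter_activity(t) - gm * m, kp * m - gp * p]

    sol = solve_ivp(rhs, (0.0, 400.0), [0.5, 12.5], max_step=1.0,
                    t_eval=np.linspace(0.0, 400.0, 9))
    for t, m, p in zip(sol.t, *sol.y):
        print(f"t = {t:5.1f} min  mRNA = {m:5.2f}  protein = {p:6.1f}")
    ```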

  12. Analysis of protein complexes through model-based biclustering of label-free quantitative AP-MS data

    PubMed Central

    Choi, Hyungwon; Kim, Sinae; Gingras, Anne-Claude; Nesvizhskii, Alexey I

    2010-01-01

    Affinity purification followed by mass spectrometry (AP-MS) has become a common approach for identifying protein–protein interactions (PPIs) and complexes. However, data analysis and visualization often rely on generic approaches that do not take advantage of the quantitative nature of AP-MS. We present a novel computational method, nested clustering, for biclustering of label-free quantitative AP-MS data. Our approach forms bait clusters based on the similarity of quantitative interaction profiles and identifies submatrices of prey proteins showing consistent quantitative association within bait clusters. In doing so, nested clustering effectively addresses the problem of overrepresentation of interactions involving bait proteins as compared with proteins only identified as preys. The method does not require specification of the number of bait clusters, which is an advantage over existing model-based clustering methods. We illustrate the performance of the algorithm using two published intermediate scale human PPI data sets, which are representative of the AP-MS data generated from mammalian cells. We also discuss general challenges of analyzing and interpreting clustering results in the context of AP-MS data. PMID:20571534
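
    The first step of such an approach, grouping baits by the similarity of their quantitative interaction profiles, can be sketched with ordinary hierarchical clustering (toy spectral-count data; this is not the authors' nested clustering algorithm itself):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # Toy bait-by-prey matrix of spectral counts (rows: baits, cols: preys)
    baits = ["baitA", "baitB", "baitC", "baitD"]
    counts = np.array([
        [30,  2, 25,  0],
        [28,  0, 22,  1],   # profile similar to baitA
        [ 1, 40,  0, 35],
        [ 0, 38,  2, 33],   # profile similar to baitC
    ], dtype=float)

    # Cluster baits on correlation distance between their interaction profiles
    Z = linkage(counts, method="average", metric="correlation")
    labels = fcluster(Z, t=2, criterion="maxclust")
    for bait, lab in zip(baits, labels):
        print(f"{bait}: cluster {lab}")
    ```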

  13. Afference copy as a quantitative neurophysiological model for consciousness.

    PubMed

    Cornelis, Hugo; Coop, Allan D

    2014-06-01

    Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider there is nothing particularly complicated about consciousness when viewed as a necessary process of the vertebrate nervous system. Here, we propose a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications. PMID:25012715

  14. Quantitative Risk Modeling of Fire on the International Space Station

    NASA Technical Reports Server (NTRS)

    Castillo, Theresa; Haught, Megan

    2014-01-01

    The International Space Station (ISS) Program has worked to prevent fire events and to mitigate their impacts should they occur. Hardware is designed to reduce sources of ignition, oxygen systems are designed to control leaking, flammable materials are prevented from flying to ISS whenever possible, the crew is trained in fire response, and fire response equipment improvements are sought out and funded. Fire prevention and mitigation are a top ISS Program priority - however, programmatic resources are limited; thus, risk trades are made to ensure an adequate level of safety is maintained onboard the ISS. In support of these risk trades, the ISS Probabilistic Risk Assessment (PRA) team has modeled the likelihood of fire occurring in the ISS pressurized cabin, a phenomenological event that has never before been probabilistically modeled in a microgravity environment. This paper will discuss the genesis of the ISS PRA fire model, its enhancement in collaboration with fire experts, and the results which have informed ISS programmatic decisions and will continue to be used throughout the life of the program.

  15. Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.

    PubMed

    Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636

  16. Software applications toward quantitative metabolic flux analysis and modeling.

    PubMed

    Dandekar, Thomas; Fieselmann, Astrid; Majeed, Saman; Ahmed, Zeeshan

    2014-01-01

    Metabolites and their pathways are central for adaptation and survival. Metabolic modeling elucidates in silico all the possible flux pathways (flux balance analysis, FBA) and predicts the actual fluxes under a given situation; further refinement of these models is possible by including experimental isotopologue data. In this review, we initially introduce the key theoretical concepts and different analysis steps in the modeling process before comparing flux calculation and metabolite analysis programs such as C13, BioOpt, COBRA toolbox, Metatool, efmtool, FiatFlux, ReMatch, VANTED, iMAT and YANA. Their respective strengths and limitations are discussed and compared to alternative software. While data analysis of metabolites, calculation of metabolic fluxes, pathways and their condition-specific changes are all possible, we highlight the considerations that need to be taken into account before deciding on a specific software. Current challenges in the field include the computation of large-scale networks (in elementary mode analysis), regulatory interactions and detailed kinetics, and these are discussed in the light of powerful new approaches.
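
    At its core, FBA is a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy example using scipy; the three-reaction network is invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: uptake -> A (v1), A -> B (v2), B -> biomass (v3)
    # Metabolite rows: A, B; reaction columns: v1, v2, v3
    S = np.array([
        [ 1.0, -1.0,  0.0],   # mass balance on A
        [ 0.0,  1.0, -1.0],   # mass balance on B
    ])
    bounds = [(0.0, 10.0), (0.0, 8.0), (0.0, None)]  # flux bounds per reaction

    # Maximize v3 (linprog minimizes, so negate the objective)
    c = np.array([0.0, 0.0, -1.0])
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal fluxes:", res.x)   # expect [8, 8, 8]: v2's bound is limiting
    ```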

  17. Afference copy as a quantitative neurophysiological model for consciousness.

    PubMed

    Cornelis, Hugo; Coop, Allan D

    2014-06-01

    Consciousness is a topic of considerable human curiosity with a long history of philosophical analysis and debate. We consider there is nothing particularly complicated about consciousness when viewed as a necessary process of the vertebrate nervous system. Here, we propose a physiological "explanatory gap" is created during each present moment by the temporal requirements of neuronal activity. The gap extends from the time exteroceptive and proprioceptive stimuli activate the nervous system until they emerge into consciousness. During this "moment", it is impossible for an organism to have any conscious knowledge of the ongoing evolution of its environment. In our schematic model, a mechanism of "afference copy" is employed to bridge the explanatory gap with consciously experienced percepts. These percepts are fabricated from the conjunction of the cumulative memory of previous relevant experience and the given stimuli. They are structured to provide the best possible prediction of the expected content of subjective conscious experience likely to occur during the period of the gap. The model is based on the proposition that the neural circuitry necessary to support consciousness is a product of sub/preconscious reflexive learning and recall processes. Based on a review of various psychological and neurophysiological findings, we develop a framework which contextualizes the model and briefly discuss further implications.

  18. A quantitative assessment of torque-transducer models for magnetoreception

    PubMed Central

    Winklhofer, Michael; Kirschvink, Joseph L.

    2010-01-01

    Although ferrimagnetic material appears suitable as a basis of magnetic field perception in animals, it is not known by which mechanism magnetic particles may transduce the magnetic field into a nerve signal. Provided that magnetic particles have remanence or anisotropic magnetic susceptibility, an external magnetic field will exert a torque and may physically twist them. Several models of such biological magnetic-torque transducers on the basis of magnetite have been proposed in the literature. We analyse from first principles the conditions under which they are viable. Models based on biogenic single-domain magnetite prove both effective and efficient, irrespective of whether the magnetic structure is coupled to mechanosensitive ion channels or to an indirect transduction pathway that exploits the stray field produced by the magnetic structure at different field orientations. On the other hand, torque-detector models that are based on magnetic multi-domain particles in the vestibular organs turn out to be ineffective. Also, we provide a generic classification scheme of torque transducers in terms of axial or polar output, within which we discuss the results from behavioural experiments conducted under altered field conditions or with pulsed fields. We find that the common assertion that a magnetoreceptor based on single-domain magnetite could not form the basis for an inclination compass does not always hold. PMID:20086054
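
    The viability question for such torque transducers largely reduces to comparing the maximal magnetic torque energy mB, with moment m = M_s·V, against the thermal energy kT. A rough sketch for a single-domain magnetite grain in the geomagnetic field; the grain size is chosen purely for illustration:

    ```python
    # Compare the magnetic torque energy of a single-domain magnetite grain with kT
    K_B = 1.380649e-23      # Boltzmann constant (J/K)
    M_S = 4.8e5             # saturation magnetization of magnetite (A/m)
    B_EARTH = 50e-6         # geomagnetic field strength (T)
    T = 300.0               # temperature (K)

    edge = 50e-9                      # cube edge of the grain (m), illustrative
    volume = edge ** 3
    moment = M_S * volume             # magnetic moment (A m^2)
    torque_energy = moment * B_EARTH  # maximal m x B energy scale (J)

    ratio = torque_energy / (K_B * T)
    print(f"mB / kT = {ratio:.2f}")
    # ~1 for a single 50 nm grain; a chain of N such grains scales the
    # ratio by roughly N, lifting the signal well above thermal noise
    ```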

  19. Quantitative Modeling of Cerenkov Light Production Efficiency from Medical Radionuclides

    PubMed Central

    Beattie, Bradley J.; Thorek, Daniel L. J.; Schmidtlein, Charles R.; Pentlow, Keith S.; Humm, John L.; Hielscher, Andreas H.

    2012-01-01

    There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte-Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use. PMID:22363636

  20. Concentric Coplanar Capacitive Sensor System with Quantitative Model

    NASA Technical Reports Server (NTRS)

    Bowler, Nicola (Inventor); Chen, Tianming (Inventor)

    2014-01-01

    A concentric coplanar capacitive sensor includes a charged central disc forming a first electrode, an outer annular ring coplanar with and outer to the charged central disc, the outer annular ring forming a second electrode, and a gap between the charged central disc and the outer annular ring. The first electrode and the second electrode may be attached to an insulative film. A method provides for determining transcapacitance between the first electrode and the second electrode and using the transcapacitance in a model that accounts for a dielectric test piece to determine inversely the properties of the dielectric test piece.

  1. Quantitative description of realistic wealth distributions by kinetic trading models

    NASA Astrophysics Data System (ADS)

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power law behavior x^(-(1+α)) at the high end, where, in general, α is greater than 1 (Pareto’s law). Models based on kinetic theory, where a set of interacting agents trade money, yield power law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, finding the saving propensity distribution which yields a given wealth distribution for all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.
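
    The forward model being inverted here is the standard kinetic exchange process: in each trade, two agents with saving propensities λ_i pool their non-saved wealth and split it randomly. A minimal simulation showing that quenched, uniformly distributed propensities produce a Pareto-like tail; agent counts and step numbers are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_agents, n_steps = 2000, 200000

    wealth = np.ones(n_agents)
    lam = rng.random(n_agents)        # quenched saving propensities in [0, 1)

    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        pot = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]
        eps = rng.random()            # random split of the pooled amount
        wealth[i] = lam[i] * wealth[i] + eps * pot
        wealth[j] = lam[j] * wealth[j] + (1 - eps) * pot

    # Crude tail check: MLE exponent over the top 5% of agents (alpha ~ 1 expected)
    tail = np.sort(wealth)[-n_agents // 20:]
    alpha = len(tail) / np.sum(np.log(tail / tail.min()))
    print(f"estimated Pareto tail exponent alpha ~ {alpha:.2f}")
    ```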

  2. Quantitative model of EBIC for CdTe

    NASA Astrophysics Data System (ADS)

    Haney, Paul; Yoon, Heayoung; Koirala, Prakash; Collins, Robert; Zhitenev, Nikolai

    2015-03-01

    Electron beam induced current (EBIC) is a powerful characterization technique which offers the high spatial resolution needed to study polycrystalline solar cells. In an EBIC experiment, a beam of high energy electrons excites electron-hole pairs, some fraction of which are collected by contacts. Ideally, an EBIC measurement reflects the spatially resolved quantum efficiency of the device. However, experiments on polycrystalline CdTe solar cells reveal that the EBIC collection efficiency is substantially lower than the quantum efficiency of the device under optical excitation. In order to reliably extract intrinsic material properties from EBIC signals, this difference must be reconciled. Two important differences between an EBIC experiment and normal device operation are (1) the high generation rate density associated with the electron beam, and (2) the substantial effect of the exposed surface in an EBIC experiment. By developing numerical and analytical models which account for both of these effects, the difference in the material response under EBIC and normal device operation conditions can be understood. Comparisons between the model and experiment show good agreement for quantities such as maximum EBIC collection efficiency versus charge generation rate.

  3. Existence, uniqueness and stability of positive periodic solution for a nonlinear prey-competition model with delays

    NASA Astrophysics Data System (ADS)

    Chen, Fengde; Xie, Xiangdong; Shi, Jinlin

    2006-10-01

    A nonlinear periodic predator-prey model with m preys, (n-m) predators, and delays is proposed in this paper, which can be seen as a modification of the traditional Lotka-Volterra prey-competition model. Sufficient conditions which guarantee the existence of a unique globally attractive positive periodic solution of the system are obtained.

  4. A Key Challenge in Global HRM: Adding New Insights to Existing Expatriate Spouse Adjustment Models

    ERIC Educational Resources Information Center

    Gupta, Ritu; Banerjee, Pratyush; Gaur, Jighyasu

    2012-01-01

    This study is an attempt to strengthen the existing knowledge about factors affecting the adjustment process of the trailing expatriate spouse and the subsequent impact of any maladjustment or expatriate failure. We conducted a qualitative enquiry using grounded theory methodology with 26 Indian spouses who had to deal with their partner's…

  5. [Establishment of simultaneous quantitative model of five alkaloids from Corydalis Rhizoma by near-infrared spectrometry].

    PubMed

    Yang, Li-xin; Zhang, Yong-xin; Feng, Wei-hong; Li, Chun

    2015-10-01

    This paper established a near-infrared spectroscopy quantitative model for simultaneous quantitative analysis of coptisine hydrochloride, dehydrocorydaline, tetrahydropalmatine, corydaline and glaucine in Corydalis Rhizoma. Firstly, the chemical values of the five components in Corydalis Rhizoma were determined by reversed-phase high performance liquid chromatography (RP-HPLC) with UV detection. Then, the quantitative calibration model was established and optimized by Fourier transform near-infrared spectroscopy (NIRS) combined with partial least squares (PLS) regression. The calibration model was evaluated by the correlation coefficient (r), the root-mean-square error of calibration (RMSEC) and the root-mean-square error of cross-validation (RMSECV) of the calibration model, as well as the correlation coefficient (r) and the root-mean-square error of prediction (RMSEP) of the prediction model. For the quantitative calibration model, the r, RMSEC and RMSECV of coptisine hydrochloride, dehydrocorydaline, tetrahydropalmatine, corydaline and glaucine were 0.9410, 0.9727, 0.9643, 0.9781, 0.9799; 0.0067, 0.0035, 0.0059, 0.0028, 0.0059; and 0.015, 0.011, 0.020, 0.010 and 0.022, respectively. For the prediction model, the r and RMSEP of the five components were 0.9166, 0.9429, 0.9436, 0.9167, 0.9145; and 0.009, 0.0066, 0.0075, 0.0069 and 0.011, respectively. The established near-infrared spectroscopy quantitative model is relatively stable, accurate and reliable for the simultaneous quantitative analysis of the five alkaloids, and is expected to be used for the rapid determination of the five components in crude drug of Corydalis Rhizoma.
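
    The modeling chain described, PLS regression of NIR spectra against HPLC reference values evaluated by RMSEC, RMSECV and RMSEP, maps directly onto standard chemometrics tooling. A schematic sketch with synthetic spectra; the data shapes, noise levels and component count are invented:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict, train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for NIR spectra: 80 samples x 500 wavelengths whose
    # absorbances depend linearly on one analyte concentration plus noise
    n, p = 80, 500
    conc = rng.uniform(0.1, 1.0, n)                 # HPLC reference values
    spectra = np.outer(conc, rng.normal(1.0, 0.2, p)) + rng.normal(0, 0.05, (n, p))

    X_cal, X_test, y_cal, y_test = train_test_split(spectra, conc, random_state=0)

    pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

    rmse = lambda y, yhat: float(np.sqrt(np.mean((y - np.ravel(yhat)) ** 2)))
    print("RMSEC :", rmse(y_cal, pls.predict(X_cal)))                    # calibration
    print("RMSECV:", rmse(y_cal, cross_val_predict(pls, X_cal, y_cal)))  # cross-validation
    print("RMSEP :", rmse(y_test, pls.predict(X_test)))                  # prediction
    ```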

  6. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the exemplar…

  7. Global existence of solutions and uniform persistence of a diffusive predator-prey model with prey-taxis

    NASA Astrophysics Data System (ADS)

    Wu, Sainan; Shi, Junping; Wu, Boying

    2016-04-01

    This paper proves the global existence and boundedness of solutions to a general reaction-diffusion predator-prey system with prey-taxis defined on a smooth bounded domain with no-flux boundary condition. The result holds for domains in arbitrary spatial dimension and small prey-taxis sensitivity coefficient. This paper also proves the existence of a global attractor and the uniform persistence of the system under some additional conditions. Applications to models from ecology and chemotaxis are discussed.

  8. A quantitative model of the biogeochemical transport of iodine

    NASA Astrophysics Data System (ADS)

    Weng, H.; Ji, Z.; Weng, J.

    2010-12-01

    Iodine deficiency disorders (IDD) are among the world’s most prevalent public health problems, yet they are preventable by dietary iodine supplements. To better understand the biogeochemical behavior of iodine and to explore safer and more efficient ways of iodine supplementation as alternatives to iodized salt, we studied the behavior of iodine as it is absorbed, accumulated and released by plants. Using Chinese cabbage as a model system and the 125I tracing technique, we established that plants take up exogenous iodine from soil, most of which is transported to the stem and leaf tissue. The level of absorption of iodine by plants is dependent on the iodine concentration in the soil, as well as on the soil type, since different soils have different iodine-adsorption capacities. The leaching experiment showed that the residual iodine content of the soil after leaching is determined by the iodine-adsorption ability of the soil and the pH of the leaching solution, but not by the volume of leaching solution. Iodine in soil and plants can also be released to the air via vaporization in a concentration-dependent manner. This study provides a scientific basis for developing new methods to prevent IDD through iodized vegetable production.

  9. Quantitative nonlinearity analysis of model-scale jet noise

    NASA Astrophysics Data System (ADS)

    Miller, Kyle G.; Reichman, Brent O.; Gee, Kent L.; Neilsen, Tracianne B.; Atchley, Anthony A.

    2015-10-01

    The effects of nonlinearity on the power spectrum of jet noise can be directly compared with those of atmospheric absorption and geometric spreading through an ensemble-averaged, frequency-domain version of the generalized Burgers equation (GBE) [B. O. Reichman et al., J. Acoust. Soc. Am. 136, 2102 (2014)]. The rate of change in the sound pressure level due to the nonlinearity, in decibels per jet nozzle diameter, is calculated using a dimensionless form of the quadspectrum of the pressure and the squared-pressure waveforms. In this paper, this formulation is applied to atmospheric propagation of a spherically spreading, initial sinusoid and unheated model-scale supersonic (Mach 2.0) jet data. The rate of change in level due to nonlinearity is calculated and compared with estimated effects due to absorption and geometric spreading. Comparing these losses with the change predicted due to nonlinearity shows that absorption and nonlinearity are of similar magnitude in the geometric far field, where shocks are present, which causes the high-frequency spectral shape to remain unchanged.

  10. Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology.

    PubMed

    Metzger, Gregory J; Kalavagunta, Chaitanya; Spilseth, Benjamin; Bolan, Patrick J; Li, Xiufeng; Hutter, Diane; Nam, Jung W; Johnson, Andrew D; Henriksen, Jonathan C; Moench, Laura; Konety, Badrinath; Warlick, Christopher A; Schmechel, Stephen C; Koopmeiners, Joseph S

    2016-06-01

    Purpose: To develop multiparametric magnetic resonance (MR) imaging models to generate a quantitative, user-independent, voxel-wise composite biomarker score (CBS) for detection of prostate cancer by using coregistered correlative histopathologic results, and to compare performance of CBS-based detection with that of single quantitative MR imaging parameters. Materials and Methods: Institutional review board approval and informed consent were obtained. Patients with a diagnosis of prostate cancer underwent multiparametric MR imaging before surgery for treatment. All MR imaging voxels in the prostate were classified as cancer or noncancer on the basis of coregistered histopathologic data. Predictive models were developed by using more than one quantitative MR imaging parameter to generate CBS maps. Model development and evaluation of quantitative MR imaging parameters and CBS were performed separately for the peripheral zone and the whole gland. Model accuracy was evaluated by using the area under the receiver operating characteristic curve (AUC), and confidence intervals were calculated with the bootstrap procedure. The improvement in classification accuracy was evaluated by comparing the AUC for the multiparametric model and the single best-performing quantitative MR imaging parameter at the individual level and in aggregate. Results: Quantitative T2, apparent diffusion coefficient (ADC), volume transfer constant (K(trans)), reflux rate constant (kep), and area under the gadolinium concentration curve at 90 seconds (AUGC90) were significantly different between cancer and noncancer voxels (P < .001), with ADC showing the best accuracy (peripheral zone AUC, 0.82; whole gland AUC, 0.74). Four-parameter models demonstrated the best performance in both the peripheral zone (AUC, 0.85; P = .010 vs ADC alone) and whole gland (AUC, 0.77; P = .043 vs ADC alone). Individual-level analysis showed statistically significant improvement in AUC in 82% (23 of 28) and 71% (24 of 34
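
    A composite biomarker score of this kind is, in essence, a supervised model mapping several quantitative MR parameters per voxel to a cancer probability. A schematic sketch using logistic regression and AUC on synthetic voxel data; the paper's actual models, parameters and values may differ:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic voxels: columns are quantitative parameters (T2, ADC, Ktrans, kep)
    n = 4000
    y = rng.random(n) < 0.2                       # 20% cancer voxels
    means = np.where(y[:, None], [70, 0.9, 0.35, 0.9], [95, 1.5, 0.15, 0.4])
    X = means + rng.normal(0, [20, 0.3, 0.12, 0.3], (n, 4))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    cbs = model.predict_proba(X)[:, 1]            # composite biomarker score

    # ADC is lower in cancer, so negate it to score cancer-positive
    print("AUC, ADC alone      :", round(roc_auc_score(y, -X[:, 1]), 3))
    print("AUC, 4-parameter CBS:", round(roc_auc_score(y, cbs), 3))
    ```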

  11. Existence and analyticity of eigenvalues of a two-channel molecular resonance model

    NASA Astrophysics Data System (ADS)

    Lakaev, S. N.; Latipov, Sh. M.

    2011-12-01

    We consider a family of operators H_γμ(k), k ∈ 𝕋^d := (−π, π]^d, associated with the Hamiltonian of a system consisting of at most two particles on a d-dimensional lattice ℤ^d, interacting via both a pair contact potential (μ > 0) and creation and annihilation operators (γ > 0). We prove the existence of a unique eigenvalue of H_γμ(k), k ∈ 𝕋^d, or its absence, depending on both the interaction parameters γ, μ ≥ 0 and the system quasimomentum k ∈ 𝕋^d. We show that the corresponding eigenvector is analytic. We establish that the eigenvalue and eigenvector are analytic functions of the quasimomentum k ∈ 𝕋^d in the existence domain G ⊂ 𝕋^d.

  12. Existence and large time behavior for a stochastic model of modified magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Razafimandimby, Paul André; Sango, Mamadou

    2015-10-01

    In this paper, we study a system of nonlinear stochastic partial differential equations describing the motion of turbulent non-Newtonian media in the presence of fluctuating magnetic field. The system is basically obtained by a coupling of the dynamical equations of a non-Newtonian fluids having p-structure and the Maxwell equations. We mainly show the existence of weak martingale solutions and their exponential decay when time goes to infinity.

  13. Using Item-Type Performance Covariance to Improve the Skill Model of an Existing Tutor

    ERIC Educational Resources Information Center

    Pavlik, Philip I., Jr.; Cen, Hao; Wu, Lili; Koedinger, Kenneth R.

    2008-01-01

    Using data from an existing pre-algebra computer-based tutor, we analyzed the covariance of item-types with the goal of describing a more effective way to assign skill labels to item-types. Analyzing covariance is important because it allows us to place the skills in a related network in which we can identify the role each skill plays in learning…

  14. Quantitative plant resistance in cultivar mixtures: wheat yellow rust as a modeling case study.

    PubMed

    Sapoukhina, Natalia; Paillard, Sophie; Dedryver, Françoise; de Vallavieille-Pope, Claude

    2013-11-01

    Unlike qualitative plant resistance, which confers immunity to disease, quantitative resistance confers only a reduction in disease severity and this can be nonspecific. Consequently, the outcome of its deployment in cultivar mixtures is not easy to predict, as on the one hand it may reduce the heterogeneity of the mixture, but on the other it may induce competition between nonspecialized strains of the pathogen. To clarify the principles for the successful use of quantitative plant resistance in disease management, we built a parsimonious model describing the dynamics of competing pathogen strains spreading through a mixture of cultivars carrying nonspecific quantitative resistance. Using the parameterized model for a wheat-yellow rust system, we demonstrate that a more effective use of quantitative resistance in mixtures involves reinforcing the effect of the highly resistant cultivars rather than replacing them. We highlight the fact that the judicious deployment of the quantitative resistance in two- or three-component mixtures makes it possible to reduce disease severity using only small proportions of the highly resistant cultivar. Our results provide insights into the effects on pathogen dynamics of deploying quantitative plant resistance, and can provide guidance for choosing appropriate associations of cultivars and optimizing diversification strategies.

  15. Enabling robust quantitative readout in an equipment-free model of device development

    PubMed Central

    Fu, Elain

    2014-01-01

    A critical constraint in the design of appropriate medical devices for the lowest-resource settings is the lack of access to maintenance or repair on instrumentation. There are numerous point-of-care applications for which quantitative readout would have clinical utility. Thus, a challenge to the device developer is to enable quantitative device readout in an equipment-free model that is appropriate for use in even the lowest-resource settings. Paper microfluidics has great potential for enabling equipment-free devices that are very low-cost, operable by minimally-trained users, and provide quantitative readout. The focus of this critical review is to describe the work, starting several decades ago and continuing to the present, to enable assays with quantitative readout in a fully-disposable device. PMID:25089298

  16. Enabling robust quantitative readout in an equipment-free model of device development.

    PubMed

    Fu, Elain

    2014-10-01

    A critical constraint in the design of appropriate medical devices for the lowest-resource settings is the lack of access to maintenance or repair on instrumentation. There are numerous point-of-care applications for which quantitative readout would have clinical utility. Thus, a challenge to the device developer is to enable quantitative device readout in an equipment-free model that is appropriate for use in even the lowest-resource settings. Paper microfluidics has great potential for enabling equipment-free devices that are low-cost, operable by minimally-trained users, and provide quantitative readout. The focus of this critical review is to describe the work, starting several decades ago and continuing to the present, to enable assays with quantitative readout in a fully-disposable device. PMID:25089298

  17. Modeling approaches for qualitative and semi-quantitative analysis of cellular signaling networks

    PubMed Central

    2013-01-01

    A central goal of systems biology is the construction of predictive models of bio-molecular networks. Cellular networks of moderate size have been modeled successfully in a quantitative way based on differential equations. However, in large-scale networks, knowledge of mechanistic details and kinetic parameters is often too limited to allow for the set-up of predictive quantitative models. Here, we review methodologies for qualitative and semi-quantitative modeling of cellular signal transduction networks. In particular, we focus on three different but related formalisms facilitating modeling of signaling processes with different levels of detail: interaction graphs, logical/Boolean networks, and logic-based ordinary differential equations (ODEs). Although they are the simplest models possible, interaction graphs allow the identification of important network properties such as signaling paths, feedback loops, or global interdependencies. Logical or Boolean models can be derived from interaction graphs by constraining the logical combination of edges. Logical models can be used to study the basic input–output behavior of the system under investigation and to analyze its qualitative dynamic properties by discrete simulations. They also provide a suitable framework to identify proper intervention strategies enforcing or repressing certain behaviors. Finally, as a third formalism, Boolean networks can be transformed into logic-based ODEs enabling studies on essential quantitative and dynamic features of a signaling network, where time and states are continuous. We describe and illustrate key methods and applications of the different modeling formalisms and discuss their relationships. In particular, as one important aspect for model reuse, we will show how these three modeling approaches can be combined to a modeling pipeline (or model hierarchy) allowing one to start with the simplest representation of a signaling network (interaction graph), which can later be refined to
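
    The logical/Boolean formalism is easy to make concrete: each node gets an update rule over the states of its regulators, and iterating the updates reveals the attractors. A tiny invented three-node example with negative feedback:

    ```python
    from itertools import product

    # Tiny invented Boolean network: input I activates A, A activates B,
    # and B inhibits A (negative feedback). Synchronous updates.
    def step(state):
        i, a, b = state
        return (i, i and not b, a)

    # Enumerate the dynamics from every initial state and report the attractor
    for init in product([False, True], repeat=3):
        trajectory, state = [], init
        while state not in trajectory:
            trajectory.append(state)
            state = step(state)
        cycle = trajectory[trajectory.index(state):]   # the attractor reached
        print([int(v) for v in init], "-> attractor:",
              [[int(v) for v in s] for s in cycle])
    ```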

  18. Modeling approaches for qualitative and semi-quantitative analysis of cellular signaling networks.

    PubMed

    Samaga, Regina; Klamt, Steffen

    2013-01-01

    A central goal of systems biology is the construction of predictive models of bio-molecular networks. Cellular networks of moderate size have been modeled successfully in a quantitative way based on differential equations. However, in large-scale networks, knowledge of mechanistic details and kinetic parameters is often too limited to allow for the set-up of predictive quantitative models. Here, we review methodologies for qualitative and semi-quantitative modeling of cellular signal transduction networks. In particular, we focus on three different but related formalisms facilitating modeling of signaling processes with different levels of detail: interaction graphs, logical/Boolean networks, and logic-based ordinary differential equations (ODEs). Although they are the simplest models possible, interaction graphs allow the identification of important network properties such as signaling paths, feedback loops, or global interdependencies. Logical or Boolean models can be derived from interaction graphs by constraining the logical combination of edges. Logical models can be used to study the basic input-output behavior of the system under investigation and to analyze its qualitative dynamic properties by discrete simulations. They also provide a suitable framework to identify proper intervention strategies enforcing or repressing certain behaviors. Finally, as a third formalism, Boolean networks can be transformed into logic-based ODEs enabling studies on essential quantitative and dynamic features of a signaling network, where time and states are continuous. We describe and illustrate key methods and applications of the different modeling formalisms and discuss their relationships. In particular, as one important aspect for model reuse, we will show how these three modeling approaches can be combined to a modeling pipeline (or model hierarchy) allowing one to start with the simplest representation of a signaling network (interaction graph), which can later be refined to logical

  19. Existence and time-discretization for the finite-strain Souza-Auricchio constitutive model for shape-memory alloys

    NASA Astrophysics Data System (ADS)

    Frigeri, Sergio; Stefanelli, Ulisse

    2012-01-01

    We prove the global existence of solutions for a shape-memory alloys constitutive model at finite strains. The model has been presented in Evangelista et al. (Int J Numer Methods Eng 81(6):761-785, 2010) and corresponds to a suitable finite-strain version of the celebrated Souza-Auricchio model for SMAs (Auricchio and Petrini in Int J Numer Methods Eng 55:1255-1284, 2002; Souza et al. in J Mech A Solids 17:789-806, 1998). We reformulate the model in purely variational fashion under the form of a rate-independent process. Existence of suitably weak (energetic) solutions to the model is obtained by passing to the limit within a constructive time-discretization procedure.

  20. Shape Optimization for Navier-Stokes Equations with Algebraic Turbulence Model: Existence Analysis

    SciTech Connect

    Bulicek, Miroslav; Haslinger, Jaroslav; Malek, Josef; Stebel, Jan

    2009-10-15

    We study a shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to an optimal control problem in which the control variable is the shape of the domain representing the header, the state problem is represented by a generalized stationary Navier-Stokes system with nontrivial mixed boundary conditions. In this paper we prove the existence of solutions both to the generalized Navier-Stokes system and to the shape optimization problem.

  1. Had the Planet Mars Not Existed: Kepler's Equant Model and Its Physical Consequences

    ERIC Educational Resources Information Center

    Bracco, C.; Provost, J.P.

    2009-01-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity, which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal…

  2. Adapting Existing Spatial Data Sets to New Uses: An Example from Energy Modeling

    SciTech Connect

    Johanesson, G; Stewart, J S; Barr, C; Sabeff, L B; George, R; Heimiller, D; Milbrandt, A

    2006-06-23

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, and economic projections. These data are available at various spatial and temporal scales, which may be different from those needed by the energy modeling community. If the translation from the original format to the format required by the energy researcher is incorrect, then resulting models can produce misleading conclusions. This is of increasing importance, because of the fine resolution data required by models for new alternative energy sources such as wind and distributed generation. This paper addresses the matter by applying spatial statistical techniques which improve the usefulness of spatial data sets (maps) that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) imputing missing data and (3) merging spatial data sets.

  3. The Power of a Good Idea: Quantitative Modeling of the Spread of Ideas from Epidemiological Models

    SciTech Connect

    Bettencourt, L. M. A.; Cintron-Arias, A.; Kaiser, D. I.; Castillo-Chavez, C.

    2005-05-05

    The population dynamics underlying the diffusion of ideas hold many qualitative similarities to those involved in the spread of infections. In spite of much suggestive evidence this analogy is hardly ever quantified in useful ways. The standard benefit of modeling epidemics is the ability to estimate quantitatively population average parameters, such as interpersonal contact rates, incubation times, duration of infectious periods, etc. In most cases such quantities generalize naturally to the spread of ideas and provide a simple means of quantifying sociological and behavioral patterns. Here we apply several paradigmatic models of epidemics to empirical data on the advent and spread of Feynman diagrams through the theoretical physics communities of the USA, Japan, and the USSR in the period immediately after World War II. This test case has the advantage of having been studied historically in great detail, which allows validation of our results. We estimate the effectiveness of adoption of the idea in the three communities and find values for parameters reflecting both intentional social organization and long lifetimes for the idea. These features are probably general characteristics of the spread of ideas, but not of common epidemics.
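
    Mapping an epidemic model onto idea diffusion amounts to reinterpreting the SIR compartments: susceptibles are potential adopters, infectives are active spreaders, and the contact rate becomes an adoption rate. A minimal sketch; the parameter values are invented, not the fitted Feynman-diagram estimates:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # SIR reinterpreted for idea diffusion: S = unaware, I = active spreaders,
    # R = no longer spreading. beta = adoption rate, 1/gamma = spreading lifetime.
    beta, gamma, n = 0.4, 0.05, 1000.0   # invented parameters

    def sir(t, y):
        s, i, r = y
        return [-beta * s * i / n, beta * s * i / n - gamma * i, gamma * i]

    sol = solve_ivp(sir, (0.0, 100.0), [n - 1.0, 1.0, 0.0],
                    t_eval=np.linspace(0.0, 100.0, 6))
    for t, s, i, r in zip(sol.t, *sol.y):
        print(f"t = {t:5.1f}: unaware = {s:6.1f}, spreading = {i:6.1f}, done = {r:6.1f}")
    print("basic reproduction number R0 =", beta / gamma)
    ```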

  4. Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models

    NASA Astrophysics Data System (ADS)

    de Diesbach, Pablo Brice

    This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and criticize the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from deeper, more business-related analyses of the mechanisms of rhetoric in marketing and communication, and of the possible role of E-SQ in man-agent interaction. We finally suggest a refined model for the impacts of these agents on web site users, and comment on the limits of the improved model.

  5. Can existing climate models be used to study anthropogenic changes in tropical cyclone climate

    SciTech Connect

    Broccoli, A.J.; Manabe, S.

    1990-10-01

    The utility of current generation climate models for studying the influence of greenhouse warming on the tropical storm climatology is examined. A method developed to identify tropical cyclones is applied to a series of model integrations. The global distribution of tropical storms is simulated by these models in a generally realistic manner. While the model resolution is insufficient to reproduce the fine structure of tropical cyclones, the simulated storms become more realistic as resolution is increased. To obtain a preliminary estimate of the response of the tropical cyclone climatology, CO2 was doubled using models with varying cloud treatments and different horizontal resolutions. In the experiment with prescribed cloudiness, the number of storm-days, a combined measure of the number and duration of tropical storms, undergoes a statistically significant reduction; a slight increase in the number of storm-days is indicated in the experiment with cloud feedback. In both cases the response is independent of horizontal resolution. While the inconclusive nature of these experimental results highlights the uncertainties that remain in examining the details of greenhouse-gas induced climate change, the ability of the models to qualitatively simulate the tropical storm climatology suggests that they are appropriate tools for this problem.

  6. Considering ethics in community eye health planning: perspectives from an existing model.

    PubMed

    Raman, Usha; Sheeladevi, Sethu

    2011-01-01

    Despite the widespread acceptance of the principles of the Alma Ata Declaration of 1978 and the subsequent amendments, health for all has remained a distant dream in many parts of the developing world. Concerns such as the economic efficiency of health systems and their reach and coverage have dominated discussions of public health, with ethics remaining at best a shadowy set of assumptions or at worst completely ignored. Similarly, questions of ethics have been taken for granted in the design of public health models across sectors and are rarely explicitly addressed. This paper uses the experience of the L V Prasad Eye Institute's (LVPEI) pyramidal model of eye healthcare delivery to explore ethical issues in the design and implementation of public health interventions. The LVPEI model evolved over time from its beginnings as a tertiary care centre to a network that spans all levels of eye care service delivery from the community through primary and secondary levels. A previously published analytical framework is applied to this model, and the utility of this framework as well as the ethics of the LVPEI model are interrogated. An analytical and prescriptive framework is then evolved that could be used to build in and evaluate ethics in other public health delivery models. PMID:22106660

  7. The quantitative assessment of domino effects caused by overpressure. Part I. Probit models.

    PubMed

    Cozzani, Valerio; Salzano, Ernesto

    2004-03-19

    Accidents caused by the domino effect are among the most severe that have taken place in the chemical and process industries. However, a well-established and widely accepted methodology for the quantitative assessment of the contribution of domino accidents to industrial risk is still missing. Hence, available data on damage to process equipment caused by blast waves were revised in the framework of quantitative risk analysis, aiming at the quantitative assessment of domino effects caused by overpressure. Specific probit models were derived for several categories of process equipment and were compared to other literature approaches for the prediction of the probability of damage of equipment loaded by overpressure. The results evidence the importance of using equipment-specific models for the probability of damage and equipment-specific damage threshold values, rather than generic equipment correlations, which may lead to errors of up to 500%. PMID:15072815
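
    Probit models of this kind map a dose, here the peak overpressure ΔP, onto a damage probability via Y = k1 + k2·ln(ΔP) and P = Φ(Y − 5). A generic sketch; the coefficients below are placeholders, not the equipment-specific values derived in the paper:

    ```python
    import math

    def damage_probability(dp_pa, k1, k2):
        """Probit damage model: Y = k1 + k2*ln(dP), P = Phi(Y - 5)."""
        y = k1 + k2 * math.log(dp_pa)
        return 0.5 * (1.0 + math.erf((y - 5.0) / math.sqrt(2.0)))

    # Placeholder coefficients for a hypothetical equipment category
    K1, K2 = -18.0, 2.2
    for dp in (5e3, 2e4, 5e4, 1e5):   # peak static overpressure in Pa
        print(f"dP = {dp:8.0f} Pa -> damage probability = "
              f"{damage_probability(dp, K1, K2):.3f}")
    ```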

  8. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of analysis results is affected by many factors. In the present paper, the influence of different sample roughnesses on the mathematical model of NIR quantitative analysis of wood density was studied. The experiments showed that if the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughnesses. The prediction ability of the roughness-mixed model was much better than that of the single-roughness model.

  9. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2016-07-01

    coverage. The predictions also provided insights into the EDII, in particular highlighting drought events where missing impact reports may reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis. It highlights the important role that quantitative analysis with impact data can have in providing "ground truth" for drought indicators, alongside more traditional stakeholder-led approaches.

  10. A quantitative analysis to objectively appraise drought indicators and model drought impacts

    NASA Astrophysics Data System (ADS)

    Bachmair, S.; Svensson, C.; Hannaford, J.; Barker, L. J.; Stahl, K.

    2015-09-01

    also provided insights into the EDII, in particular highlighting drought events where missing impact reports reflect a lack of recording rather than true absence of impacts. Overall, the presented quantitative framework proved to be a useful tool for evaluating drought indicators, and to model impact occurrence. In summary, this study demonstrates the information gain for drought monitoring and early warning through impact data collection and analysis, and highlights the important role that quantitative analysis with impacts data can have in providing "ground truth" for drought indicators alongside more traditional stakeholder-led approaches.

  11. Utilization of data estimation via existing models, within a tiered data quality system, for populating species sensitivity distributions

    EPA Science Inventory

    The acquisition of toxicity test data of sufficient quality from the open literature to fulfill taxonomic diversity requirements can be a limiting factor in the creation of new 304(a) Aquatic Life Criteria. The use of existing models (WebICE and ACE) that estimate acute and chronic eff...

  12. Had the planet Mars not existed: Kepler's equant model and its physical consequences

    NASA Astrophysics Data System (ADS)

    Bracco, C.; Provost, J.-P.

    2009-09-01

    We examine the equant model for the motion of planets, which was the starting point of Kepler's investigations before he modified it because of Mars observations. We show that, up to first order in eccentricity, this model implies for each orbit a velocity which satisfies Kepler's second law and Hamilton's hodograph, and a centripetal acceleration with an r^-2 dependence on the distance to the Sun. If this dependence is assumed to be universal, Kepler's third law follows immediately. This elementary exercise in kinematics for undergraduates emphasizes the closeness of the equant model, inherited from ancient Greece, to our present knowledge. It adds to its historical interest a didactical relevance concerning, in particular, the discussion of the Aristotelian or Newtonian conception of motion.

  13. The Existence and Stability Analysis of the Equilibria in Dengue Disease Infection Model

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Supriatna, A. K.; Soewono, E.

    2015-06-01

    In this paper we formulate an SIR (Susceptible - Infective - Recovered) model of Dengue fever transmission with constant recruitment. We found a threshold parameter K0, known as the Basic Reproduction Number (BRN). This model has two equilibria, the disease-free equilibrium and the endemic equilibrium. By constructing a suitable Lyapunov function, we show that the disease-free equilibrium is globally asymptotically stable whenever the BRN is less than one, and that when it is greater than one, the endemic equilibrium is globally asymptotically stable. Numerical results show the dynamics of each compartment together with the effect of multiple bio-agent intervention as a control of dengue transmission.
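
    The threshold behavior can be checked numerically for a standard SIR model with constant recruitment Λ and mortality μ, where K0 = βΛ/(μ(μ+γ)) and the eigenvalues of the Jacobian at the disease-free equilibrium change sign as K0 crosses one. A sketch with invented parameter values, which may differ from the paper's exact formulation:

    ```python
    import numpy as np

    # SIR with constant recruitment Lam and per-capita mortality mu (invented values)
    Lam, mu, gamma = 10.0, 0.01, 0.1

    def analyze(beta):
        k0 = beta * Lam / (mu * (mu + gamma))   # basic reproduction number
        # Disease-free equilibrium: S = Lam/mu, I = 0; Jacobian of (S, I) subsystem
        s0 = Lam / mu
        J = np.array([[-mu, -beta * s0],
                      [0.0, beta * s0 - (mu + gamma)]])
        stable = np.all(np.linalg.eigvals(J).real < 0)
        print(f"beta = {beta:.1e}: K0 = {k0:.2f}, disease-free equilibrium "
              f"{'stable' if stable else 'unstable'}")

    analyze(5e-5)   # K0 < 1: infection dies out
    analyze(5e-4)   # K0 > 1: endemic equilibrium takes over
    ```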

  14. Benthic-Pelagic Coupling in Biogeochemical and Climate Models: Existing Approaches, Recent developments and Roadblocks

    NASA Astrophysics Data System (ADS)

    Arndt, Sandra

    2016-04-01

    Marine sediments are key components in the Earth System. They host the largest carbon reservoir on Earth, provide the only long-term sink for atmospheric CO2, recycle nutrients and represent the most important climate archive. Biogeochemical processes in marine sediments are thus essential for our understanding of the global biogeochemical cycles and climate. They are, first and foremost, donor-controlled and thus driven by the rain of particulate material from the euphotic zone and influenced by the overlying bottom water. Geochemical species may undergo several recycling loops (e.g. authigenic mineral precipitation/dissolution) before they are either buried or diffuse back to the water column. The tightly coupled and complex pelagic and benthic process interplay thus delays recycling fluxes, significantly modifies the depositional signal and controls the long-term removal of carbon from the ocean-atmosphere system. Despite the importance of this mutual interaction, coupled regional/global biogeochemical models and (paleo)climate models, which are designed to assess and quantify the transformations and fluxes of carbon and nutrients and evaluate their response to past and future perturbations of the climate system, either completely neglect marine sediments or incorporate a highly simplified representation of benthic processes. At the other end of the spectrum, coupled, multi-component, state-of-the-art early diagenetic models have been successfully developed and applied over the past decades to reproduce observations and quantify sediment-water exchange fluxes, but cannot easily be coupled to pelagic models. The primary constraint here is the high computational cost of simulating all of the essential redox and equilibrium reactions within marine sediments that control carbon burial and benthic recycling fluxes: a barrier that is easily exacerbated if a variety of benthic environments are to be spatially resolved. This presentation provides an integrative overview of

  15. Existing Whole-House Solutions Case Study: Community-Scale Energy Modeling - Southeastern United States

    SciTech Connect

    2014-12-01

    Community-scale energy modeling and testing are useful for determining energy conservation measures that will effectively reduce energy use. To that end, IBACOS analyzed pre-retrofit daily utility data to sort homes by energy consumption, allowing for better targeting of homes for physical audits. Following ASHRAE Guideline 14 normalization procedures, electricity consumption of 1,166 all-electric, production-built homes was modeled. The homes were in two communities: one built in the 1970s and the other in the mid-2000s.
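
    As context for the normalization step, a simplified degree-day regression of the sort that underlies such utility-data analyses is sketched below (ASHRAE Guideline 14 prescribes specific procedures and acceptance criteria that are not reproduced here; function and variable names are illustrative):

    ```python
    # Regress daily electricity use on heating/cooling degree days to separate
    # weather-dependent use from baseload, a common weather-normalization step.
    import numpy as np

    def normalize(daily_kwh, daily_temp_f, base_f=65.0):
        hdd = np.maximum(base_f - daily_temp_f, 0.0)   # heating degree days
        cdd = np.maximum(daily_temp_f - base_f, 0.0)   # cooling degree days
        X = np.column_stack([np.ones_like(hdd), hdd, cdd])
        coef, *_ = np.linalg.lstsq(X, daily_kwh, rcond=None)
        return coef  # [baseload kWh/day, kWh per HDD, kWh per CDD]

    rng = np.random.default_rng(0)
    temps = rng.uniform(30, 95, 365)
    use = 20 + 0.8 * np.maximum(65 - temps, 0) + 1.1 * np.maximum(temps - 65, 0) \
          + rng.normal(0, 2, 365)
    print(normalize(use, temps))  # recovers roughly [20, 0.8, 1.1]
    ```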

  16. What are the unique attributes of potassium that challenge existing nutrient uptake models?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil potassium (K) availability and acquisition by plant root systems are controlled by complex, interacting processes that make it difficult to assess their individual impacts on crop growth. Mechanistic, mathematical models provide an important tool to enhance understanding of these processes, and...

  17. DOSIMETRY MODELING OF INHALED FORMALDEHYDE: BINNING NASAL FLUX PREDICTIONS FOR QUANTITATIVE RISK ASSESSMENT

    EPA Science Inventory

    Dosimetry Modeling of Inhaled Formaldehyde: Binning Nasal Flux Predictions for Quantitative Risk Assessment. Kimbell, J.S., Overton, J.H., Subramaniam, R.P., Schlosser, P.M., Morgan, K.T., Conolly, R.B., and Miller, F.J. (2001). Toxicol. Sci. 000, 000:000.

    Interspecies e...

  18. Quantitative approaches to utilizing mutational analysis and disulfide crosslinking for modeling a transmembrane domain.

    PubMed Central

    Lee, G. F.; Hazelbauer, G. L.

    1995-01-01

    The transmembrane domain of chemoreceptor Trg from Escherichia coli contains four transmembrane segments in its native homodimer, two from each subunit. We had previously used mutational analysis and sulfhydryl crosslinking between introduced cysteines to obtain data relevant to the three-dimensional organization of this domain. In the current study we used Fourier analysis to assess these data quantitatively for periodicity along the sequences of the segments. The analyses provided a strong indication of alpha-helical periodicity in the first transmembrane segment and a substantial indication of that periodicity for the second segment. On this basis, we treated both segments as idealized alpha-helices and proceeded to model the transmembrane domain as a unit of four helices. For this modeling, we calculated helical crosslinking moments, parameters analogous to helical hydrophobic moments, as a quantitative way of condensing and utilizing a large body of crosslinking data. Crosslinking moments were used to define the relative separation and orientation of helical pairs, thus creating a quantitatively derived model for the transmembrane domain of Trg. The use of Fourier transforms to provide a quantitative indication of periodicity in data from analyses of transmembrane segments, in combination with helical crosslinking moments to position helical pairs, should be useful in modeling other transmembrane domains. PMID:7549874
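
    The two quantitative tools named in the abstract can be sketched as follows: a Fourier power spectrum to detect the ~100 degrees/residue alpha-helical periodicity, and a helical moment computed as a vector sum around the helical wheel. The per-residue scores below are made-up illustrative numbers, not the Trg crosslinking data:

    ```python
    import numpy as np

    def power_spectrum(scores, omega_deg):
        # |sum_n (x_n - mean) exp(i*omega*n)|^2 at angular frequency omega
        n = np.arange(len(scores))
        x = scores - np.mean(scores)
        return np.abs(np.sum(x * np.exp(1j * np.deg2rad(omega_deg) * n))) ** 2

    def helical_moment(scores, delta_deg=100.0):
        # vector sum of per-residue scores placed 100 degrees apart on the wheel
        n = np.arange(len(scores))
        m = np.sum(scores * np.exp(1j * np.deg2rad(delta_deg) * n))
        return np.abs(m), np.rad2deg(np.angle(m)) % 360  # magnitude, direction

    scores = np.array([0.9, 0.1, 0.2, 0.8, 0.7, 0.0, 0.3, 0.9, 0.6, 0.1, 0.2, 1.0])
    omegas = np.arange(60, 141)
    peak = omegas[np.argmax([power_spectrum(scores, o) for o in omegas])]
    print(f"spectral peak near {peak} deg/residue; moment = {helical_moment(scores)}")
    ```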

  19. Quantitative Model of Systemic Toxicity Using ToxCast and ToxRefDB (SOT)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  20. Framework for a Quantitative Systemic Toxicity Model (FutureToxII)

    EPA Science Inventory

    EPA’s ToxCast program profiles the bioactivity of chemicals in a diverse set of ~700 high throughput screening (HTS) assays. In collaboration with L’Oreal, a quantitative model of systemic toxicity was developed using no effect levels (NEL) from ToxRefDB for 633 chemicals with HT...

  1. Modelling of the shielding capabilities of the existing solid radioactive waste storages at Ignalina NPP.

    PubMed

    Smaizys, Arturas; Poskas, Povilas; Ragaisis, Valdas

    2005-01-01

    There is only one nuclear power plant in Lithuania--the Ignalina NPP (INPP). The INPP operates two similar units with a design electrical power of 1500 MW; the units were commissioned in 1983 and 1987, respectively. From the beginning of INPP operation, all generated solid radioactive waste has been collected and stored at the Soviet-type solid radwaste facility located at the INPP site. The INPP solid radwaste storage facility consists of four buildings, namely building No. 155, No. 155/1, No. 157 and No. 157/1. The buildings of the INPP solid radwaste storage facility are above-ground reinforced concrete structures. The State Nuclear Safety Inspectorate (VATESI) has specified that a dedicated safety analysis must be performed for the existing radioactive waste storage facilities of the INPP. As part of the safety analysis, the shielding capabilities of the walls and roofs of these buildings were analysed. This paper presents the radiation shielding analysis of buildings No. 157 and No. 157/1, which are still in operation. Buildings No. 155 and No. 155/1 are already filled with waste, and no additional waste loading is expected. PMID:16604672

  2. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT.

    PubMed

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R; La Riviere, Patrick J; Alessio, Adam M

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This
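
    A much-simplified sketch of model-based MBF estimation: simulate a tissue time-attenuation curve as the convolution of an arterial input function with a flow-scaled exponential impulse response, then fit the flow parameter back by least squares. This is a one-compartment (Kety-type) stand-in with illustrative numbers, not the exact models compared in the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(0, 60, 1.0)                    # 1 s sampling, seconds
    aif = 300 * (t / 8.0) * np.exp(-t / 8.0)     # gamma-variate arterial input (HU)

    def tissue_curve(t, flow, k2):
        irf = flow * np.exp(-k2 * t)             # flow-scaled impulse response
        return np.convolve(aif, irf)[: len(t)] * (t[1] - t[0])

    rng = np.random.default_rng(1)
    data = tissue_curve(t, 0.01, 0.05) + rng.normal(0, 1.0, len(t))  # noisy "scan"
    popt, _ = curve_fit(tissue_curve, t, data, p0=[0.005, 0.02])
    print(f"fitted flow parameter = {popt[0]:.4f} (true 0.01)")
    ```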

  3. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that

  4. Principles of microRNA Regulation Revealed Through Modeling microRNA Expression Quantitative Trait Loci.

    PubMed

    Budach, Stefan; Heinig, Matthias; Marsico, Annalisa

    2016-08-01

    Extensive work has been dedicated to studying the mechanisms of microRNA-mediated gene regulation. However, the transcriptional regulation of microRNAs themselves is far less well understood, owing to difficulties in determining the transcription start sites of transient primary transcripts. This challenge can be addressed using expression quantitative trait loci (eQTLs), whose regulatory effects represent a natural source of perturbation of cis-regulatory elements. Here we used previously published cis-microRNA-eQTL data for the human GM12878 cell line, promoter predictions, and other functional annotations to determine the relationship between functional elements and microRNA regulation. We built a logistic regression model that classifies microRNA/SNP pairs into eQTLs or non-eQTLs with 85% accuracy; it shows microRNA-eQTL enrichment for microRNA precursors, promoters, enhancers, and transcription factor binding sites, and depletion for repressed chromatin. Interestingly, although there is a large overlap between microRNA eQTLs and messenger RNA eQTLs of host genes, 74% of these shared eQTLs affect microRNA and host expression independently. Considering microRNA-only eQTLs, we find a significant enrichment for intronic promoters, validating the existence of alternative promoters for intragenic microRNAs. Finally, in line with the GM12878 cell line being derived from B cells, we find that genome-wide association (GWA) variants associated with blood-related traits are more likely to be microRNA eQTLs than random GWA and non-GWA variants, aiding the interpretation of GWA results. PMID:27260304
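
    A schematic version of the paper's classifier: logistic regression separating microRNA/SNP pairs into eQTLs versus non-eQTLs from binary functional annotations. Feature names and data below are illustrative placeholders, not the study's features:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500
    # e.g. in_precursor, in_promoter, in_enhancer, in_TFBS (0/1 annotations)
    X = rng.integers(0, 2, size=(n, 4))
    logit = X @ np.array([2.0, 1.5, 1.0, 0.8]) - 2.0
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic eQTL labels

    clf = LogisticRegression()
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    print("per-annotation log-odds:", clf.fit(X, y).coef_)
    ```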

  5. Principles of microRNA Regulation Revealed Through Modeling microRNA Expression Quantitative Trait Loci.

    PubMed

    Budach, Stefan; Heinig, Matthias; Marsico, Annalisa

    2016-08-01

    Extensive work has been dedicated to studying the mechanisms of microRNA-mediated gene regulation. However, the transcriptional regulation of microRNAs themselves is far less well understood, owing to difficulties in determining the transcription start sites of transient primary transcripts. This challenge can be addressed using expression quantitative trait loci (eQTLs), whose regulatory effects represent a natural source of perturbation of cis-regulatory elements. Here we used previously published cis-microRNA-eQTL data for the human GM12878 cell line, promoter predictions, and other functional annotations to determine the relationship between functional elements and microRNA regulation. We built a logistic regression model that classifies microRNA/SNP pairs into eQTLs or non-eQTLs with 85% accuracy; it shows microRNA-eQTL enrichment for microRNA precursors, promoters, enhancers, and transcription factor binding sites, and depletion for repressed chromatin. Interestingly, although there is a large overlap between microRNA eQTLs and messenger RNA eQTLs of host genes, 74% of these shared eQTLs affect microRNA and host expression independently. Considering microRNA-only eQTLs, we find a significant enrichment for intronic promoters, validating the existence of alternative promoters for intragenic microRNAs. Finally, in line with the GM12878 cell line being derived from B cells, we find that genome-wide association (GWA) variants associated with blood-related traits are more likely to be microRNA eQTLs than random GWA and non-GWA variants, aiding the interpretation of GWA results.

  6. Principles of microRNA Regulation Revealed Through Modeling microRNA Expression Quantitative Trait Loci

    PubMed Central

    Budach, Stefan; Heinig, Matthias; Marsico, Annalisa

    2016-01-01

    Extensive work has been dedicated to studying the mechanisms of microRNA-mediated gene regulation. However, the transcriptional regulation of microRNAs themselves is far less well understood, owing to difficulties in determining the transcription start sites of transient primary transcripts. This challenge can be addressed using expression quantitative trait loci (eQTLs), whose regulatory effects represent a natural source of perturbation of cis-regulatory elements. Here we used previously published cis-microRNA-eQTL data for the human GM12878 cell line, promoter predictions, and other functional annotations to determine the relationship between functional elements and microRNA regulation. We built a logistic regression model that classifies microRNA/SNP pairs into eQTLs or non-eQTLs with 85% accuracy; it shows microRNA-eQTL enrichment for microRNA precursors, promoters, enhancers, and transcription factor binding sites, and depletion for repressed chromatin. Interestingly, although there is a large overlap between microRNA eQTLs and messenger RNA eQTLs of host genes, 74% of these shared eQTLs affect microRNA and host expression independently. Considering microRNA-only eQTLs, we find a significant enrichment for intronic promoters, validating the existence of alternative promoters for intragenic microRNAs. Finally, in line with the GM12878 cell line being derived from B cells, we find that genome-wide association (GWA) variants associated with blood-related traits are more likely to be microRNA eQTLs than random GWA and non-GWA variants, aiding the interpretation of GWA results. PMID:27260304

  7. Modeling autism-relevant behavioral phenotypes in rats and mice: Do 'autistic' rodents exist?

    PubMed

    Servadio, Michela; Vanderschuren, Louk J M J; Trezza, Viviana

    2015-09-01

    Autism spectrum disorders (ASD) are among the most severe developmental psychiatric disorders known today, characterized by impairments in communication and social interaction and by stereotyped behaviors. However, no specific treatments for ASD are yet available. Animal studies, which enable selective genetic, neural, and pharmacological manipulations, are essential in ASD research. They make it possible to dissect the role of genetic and environmental factors in the pathogenesis of the disease, circumventing the many confounding variables present in human studies. Furthermore, they make it possible to unravel the relationships between altered brain function in ASD and behavior, and they are essential for testing new pharmacological options and their side effects. Here, we first discuss the concepts of construct, face, and predictive validity in rodent models of ASD. Then, we discuss how ASD-relevant behavioral phenotypes can be mimicked in rodents. Finally, we provide examples of environmental and genetic rodent models widely used and validated in ASD research. We conclude that, although no animal model can capture at once all the molecular, cellular, and behavioral features of ASD, a useful approach is to focus on specific autism-relevant behavioral features and study their neural underpinnings. This approach has greatly contributed to our understanding of this disease and is useful in identifying new therapeutic targets.

  8. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling.

    PubMed

    Dick, Daniel G; Maxwell, Erin E

    2015-07-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the 'migration model'. PMID:26156130

  9. Bayesian methods for quantitative trait loci mapping based on model selection: approximate analysis using the Bayesian information criterion.

    PubMed Central

    Ball, R D

    2001-01-01

    We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data. PMID:11729175
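
    A sketch of the BIC-delta idea described above: the usual BIC penalty k ln(n) is scaled by delta, and models are weighted by exp(-BIC/2) to approximate posterior model probabilities. Gaussian regression is assumed, so -2 ln L reduces to n ln(RSS/n) up to a constant; all numbers are illustrative:

    ```python
    import numpy as np

    def bic_delta(rss, n, k, delta=2.0):
        # BIC with the variable-count penalty inflated by delta
        return n * np.log(rss / n) + delta * k * np.log(n)

    def posterior_probs(bics):
        # exp(-BIC/2), normalized; shift by the minimum for numerical stability
        w = np.exp(-0.5 * (np.asarray(bics) - np.min(bics)))
        return w / w.sum()

    # models with 0, 1, 2 markers as regressors (illustrative RSS values)
    bics = [bic_delta(r, n=120, k=k) for r, k in [(300.0, 1), (210.0, 2), (205.0, 3)]]
    p = posterior_probs(bics)
    print("P(nonzero model size) ~", p[1:].sum())  # posterior probability of a linked QTL
    ```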

  10. Image reconstruction with noise and error modelling in quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Tarvainen, Tanja; Pulkkinen, Aki; Cox, Ben T.; Kaipio, Jari P.; Arridge, Simon R.

    2016-03-01

    Quantitative photoacoustic tomography is an emerging imaging technique aimed at estimating the optical parameters inside tissue from photoacoustic images. The method proceeds from photoacoustic tomography by taking the estimated initial pressure distributions as data and estimating the absolute values of the optical parameters. Therefore, both the data and the noise of the second (optical) inverse problem are affected by the method applied to solve the first (acoustic) inverse problem. In this work, the Bayesian approach for quantitative photoacoustic tomography is taken. Modelling of noise and errors and incorporating their statistics into the solution of the inverse problem are investigated.

  11. Models and methods for quantitative analysis of surface-enhanced Raman spectra.

    PubMed

    Li, Shuo; Nyagilo, James O; Dave, Digant P; Gao, Jean

    2014-03-01

    The quantitative analysis of surface-enhanced Raman spectra using scattering nanoparticles has shown potential and promising applications in in vivo molecular imaging. Diverse approaches have been used for the quantitative analysis of Raman spectral information, which can be categorized as direct classical least squares models, full-spectrum multivariate calibration models, selected multivariate calibration models, and latent variable regression (LVR) models. However, the working principles of these methods in Raman spectra applications remain poorly understood, and a clear picture of the overall performance of each model is missing. Based on the characteristics of the Raman spectra, in this paper we first provide the theoretical foundation of the aforementioned commonly used models and show why the LVR models are more suitable for quantitative analysis of Raman spectra. Then, we demonstrate the fundamental connections and differences between different LVR methods, such as principal component regression, reduced-rank regression, partial least squares regression (PLSR), canonical correlation regression, and robust canonical analysis, by comparing their objective functions and constraints. We further show that PLSR is essentially a blend of multivariate calibration and feature extraction that relates concentrations of nanotags to spectrum intensity. These features (a.k.a. latent variables) serve two purposes: the best representation of the predictor matrix and correlation with the response matrix. These illustrations give a new understanding of traditional PLSR and explain why PLSR exceeds other methods in quantitative analysis of Raman spectra. Finally, all the methods are tested on Raman spectra datasets with different evaluation criteria to assess their performance.
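
    A minimal PLSR sketch of the kind of latent-variable regression the paper analyzes: relate spectra (X) to nanotag concentrations (Y) through a small number of latent variables. The data here are synthetic placeholders, not Raman measurements:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_samples, n_wavenumbers = 60, 500
    conc = rng.uniform(0, 1, size=(n_samples, 2))        # two nanotag concentrations
    peaks = rng.normal(0, 1, size=(2, n_wavenumbers))    # pure-component "spectra"
    X = conc @ peaks + 0.05 * rng.normal(0, 1, size=(n_samples, n_wavenumbers))

    pls = PLSRegression(n_components=2).fit(X, conc)     # two latent variables
    print("R^2 on training spectra:", pls.score(X, conc))
    ```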

  12. Modeling Magnetite Reflectance Spectra Using Hapke Theory and Existing Optical Constants

    NASA Technical Reports Server (NTRS)

    Roush, T. L.; Blewett, D. T.; Cahill, J. T. S.

    2016-01-01

    Magnetite is an accessory mineral found in terrestrial environments, some meteorites, and the lunar surface. The reflectance of magnetite powders is relatively low [1], and this property makes it an analog for other dark Fe- or Ti-bearing components, particularly ilmenite on the lunar surface. The real and imaginary indices of refraction (optical constants) for magnetite are available in the literature [2-3] and online [4]. Here we use these values to calculate the reflectance of particulates and compare the model spectra to reflectance measurements of magnetite available online [5].
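
    A sketch of a Hapke-type bidirectional reflectance calculation from a single-scattering albedo w (which would itself be derived from the optical constants n, k and grain size). Isotropic scattering and no opposition surge are assumed for brevity; the paper's actual modeling choices may differ:

    ```python
    import numpy as np

    def H(x, w):
        # Chandrasekhar H-function, standard two-stream approximation
        g = np.sqrt(1.0 - w)
        return (1.0 + 2.0 * x) / (1.0 + 2.0 * x * g)

    def hapke_reflectance(w, inc_deg=30.0, emi_deg=0.0):
        # Hapke bidirectional reflectance with p(g)=1 (isotropic), B(g)=0
        mu0, mu = np.cos(np.radians(inc_deg)), np.cos(np.radians(emi_deg))
        return (w / 4.0) * (mu0 / (mu0 + mu)) * H(mu0, w) * H(mu, w)

    for w in (0.05, 0.2, 0.5):   # dark, magnetite-like albedos at the low end
        print(f"w = {w}: r = {hapke_reflectance(w):.4f}")
    ```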

  13. Quantitative genetic models for describing simultaneous and recursive relationships between phenotypes.

    PubMed Central

    Gianola, Daniel; Sorensen, Daniel

    2004-01-01

    Multivariate models are of great importance in theoretical and applied quantitative genetics. We extend quantitative genetic theory to accommodate situations in which there is linear feedback or recursiveness between the phenotypes involved in a multivariate system, assuming an infinitesimal, additive, model of inheritance. It is shown that structural parameters defining a simultaneous or recursive system have a bearing on the interpretation of quantitative genetic parameter estimates (e.g., heritability, offspring-parent regression, genetic correlation) when such features are ignored. Matrix representations are given for treating a plethora of feedback-recursive situations. The likelihood function is derived, assuming multivariate normality, and results from econometric theory for parameter identification are adapted to a quantitative genetic setting. A Bayesian treatment with a Markov chain Monte Carlo implementation is suggested for inference and developed. When the system is fully recursive, all conditional posterior distributions are in closed form, so Gibbs sampling is straightforward. If there is feedback, a Metropolis step may be embedded for sampling the structural parameters, since their conditional distributions are unknown. Extensions of the model to discrete random variables and to nonlinear relationships between phenotypes are discussed. PMID:15280252
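
    Schematically, a simultaneous/recursive multivariate system of the kind treated in the paper can be written as below (notation assumed here, following common structural-equation conventions rather than the paper's exact symbols):

    ```latex
    % Phenotypes y feed back on each other through the structural matrix \Lambda:
    \begin{align*}
      \mathbf{y} = \boldsymbol{\Lambda}\mathbf{y}
                 + \mathbf{X}\boldsymbol{\beta}
                 + \mathbf{u} + \mathbf{e},
      \qquad
      \mathbf{u} \sim N(\mathbf{0}, \mathbf{G}), \quad
      \mathbf{e} \sim N(\mathbf{0}, \mathbf{R}),
    \end{align*}
    % with reduced form
    %   y = (I - \Lambda)^{-1} (X\beta + u + e),
    % so genetic-parameter estimates that ignore \Lambda are effectively
    % transformed by (I - \Lambda)^{-1}, which is why the structural
    % parameters bear on the interpretation of heritabilities and
    % genetic correlations.
    ```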

  14. Quantitative model of the phase behavior of recombinant pH-responsive elastin-like polypeptides.

    PubMed

    Mackay, J Andrew; Callahan, Daniel J; Fitzgerald, Kelly N; Chilkoti, Ashutosh

    2010-11-01

    Quantitative models are required to engineer biomaterials with environmentally responsive properties. With this goal in mind, we developed a model that describes the pH-dependent phase behavior of a class of stimulus responsive elastin-like polypeptides (ELPs) that undergo reversible phase separation in response to their solution environment. Under isothermal conditions, charged ELPs can undergo phase separation when their charge is neutralized. Optimization of this behavior has been challenging because the pH at which they phase separate, pHt, depends on their composition, molecular weight, concentration, and temperature. To address this problem, we developed a quantitative model to describe the phase behavior of charged ELPs that uses the Henderson-Hasselbalch relationship to describe the effect of side-chain ionization on the phase-transition temperature of an ELP. The model was validated with pH-responsive ELPs that contained either acidic (Glu) or basic (His) residues. The phase separation of both ELPs fit this model across a range of pH. These results have important implications for applications of pH-responsive ELPs because they provide a quantitative model for the rational design of pH-responsive polypeptides whose transition can be triggered at a specified pH.
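
    A sketch of the abstract's core idea: couple a Henderson-Hasselbalch ionization curve to the ELP transition temperature, then solve for the transition pH (pHt) at a fixed operating temperature. The functional form and numbers below are illustrative assumptions, not the paper's fitted model:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def frac_deprotonated(pH, pKa):
        return 1.0 / (1.0 + 10.0 ** (pKa - pH))   # Henderson-Hasselbalch

    def Tt(pH, pKa=6.0, Tt_uncharged=30.0, dTt_per_charge=25.0):
        # charging (here, Glu deprotonation) raises the transition temperature
        return Tt_uncharged + dTt_per_charge * frac_deprotonated(pH, pKa)

    T_operating = 37.0  # isothermal condition, deg C
    pHt = brentq(lambda pH: Tt(pH) - T_operating, 2.0, 12.0)
    print(f"phase separation triggered below pH ~ {pHt:.2f} at {T_operating} C")
    ```

    The same construction works for basic (His) residues by letting protonation, rather than deprotonation, carry the charge.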

  15. Water Use Conservation Scenarios for the Mississippi Delta Using an Existing Regional Groundwater Flow Model

    NASA Astrophysics Data System (ADS)

    Barlow, J. R.; Clark, B. R.

    2010-12-01

    The alluvial plain in northwestern Mississippi, locally referred to as the Delta, is a major agricultural area that contributes significantly to the economy of Mississippi. Land use in this area can be greater than 90 percent agriculture, primarily for growing catfish, corn, cotton, rice, and soybean. Irrigation is needed to smooth out the vagaries of climate; it is necessary for the cultivation of rice and for optimizing corn and soybean yields. The Mississippi River Valley alluvial (MRVA) aquifer, which underlies the Delta, is the sole source of water for irrigation, and overuse of the aquifer has led to water-level declines, particularly in the central region. The Yazoo-Mississippi-Delta Joint Water Management District (YMD), which is responsible for water issues in the 17-county area that makes up the Delta, is directing resources to reduce water use through conservation efforts. The U.S. Geological Survey (USGS) recently completed a regional groundwater flow model of the entire Mississippi embayment, including the Mississippi Delta region, to further our understanding of water availability within the embayment system. This model is being used by the USGS to assist YMD in optimizing its conservation efforts by applying various water-use reduction scenarios, either uniformly throughout the Delta or in focused areas where there have been large groundwater declines in the MRVA aquifer.

  16. Design of a Representative Low Earth Orbit Satellite to Improve Existing Debris Models

    NASA Technical Reports Server (NTRS)

    Clark, S.; Dietrich, A.; Werremeyer, M.; Fitz-Coy, N.; Liou, J.-C.

    2012-01-01

    This paper summarizes the process and methodologies used in the design of a small satellite, DebriSat, that represents the materials and construction methods used in modern Low Earth Orbit (LEO) satellites. This satellite will be used in a future hypervelocity impact test whose overall purpose is to investigate the physical characteristics of modern LEO satellites after an on-orbit collision. The major ground-based satellite impact experiment used by the DoD and NASA in their development of satellite breakup models was conducted in 1992. The target used for that experiment was a Navy Transit satellite (40 cm, 35 kg) fabricated in the 1960s. Modern satellites are very different in materials and construction techniques from a satellite built 40 years ago; therefore, a similar experiment using a modern target satellite is needed to improve the fidelity of satellite breakup models. The DebriSat effort focuses on designing and building a next-generation satellite that more accurately portrays modern satellites. The design of DebriSat included a comprehensive study of historical LEO satellite designs and missions within the past 15 years, covering satellites ranging from 10 kg to 5000 kg. This study identified modern trends in the hardware, materials, and construction practices utilized in recent LEO missions, and helped direct the design of DebriSat.

  17. The effects of the overline running model of the high-speed trains on the existing lines

    NASA Astrophysics Data System (ADS)

    Qian, Yong-Sheng; Zeng, Jun-Wei; Zhang, Xiao-Long; Wang, Jia-Yuan; Lv, Ting-Ting

    2016-09-01

    This paper studies the effects on an existing railway line when high-speed trains running at 216 km/h cross over onto it. The influence of the transportation organization mode of the existing line on its carrying capacity is also analyzed under different stopping patterns of the high-speed trains. To further study train departure intervals, average speeds, and delays, a cellular automaton model under four-aspect signaling is established. The results of this research could serve as theoretical references for newly built high-speed railways.
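
    For orientation, a generic Nagel-Schreckenberg-style cellular automaton update of the family used in such studies is sketched below; the paper's specific four-aspect signaling rules and rail dynamics are not reproduced here:

    ```python
    import numpy as np

    def step(pos, vel, v_max, track_len, p_slow, rng):
        # one parallel update on a ring: accelerate, brake to gap, randomize, move
        order = np.argsort(pos)
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % track_len   # empty cells ahead
        vel = np.minimum(vel + 1, v_max)                  # accelerate
        vel = np.minimum(vel, gaps)                       # never hit the leader
        vel = np.where((rng.random(len(vel)) < p_slow) & (vel > 0), vel - 1, vel)
        return (pos + vel) % track_len, vel

    rng = np.random.default_rng(0)
    pos = rng.choice(200, size=30, replace=False)
    vel = np.zeros(30, dtype=int)
    for _ in range(100):
        pos, vel = step(pos, vel, v_max=5, track_len=200, p_slow=0.2, rng=rng)
    print("mean speed:", vel.mean())
    ```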

  18. Quantitative Genetics and Functional–Structural Plant Growth Models: Simulation of Quantitative Trait Loci Detection for Model Parameters and Application to Potential Yield Optimization

    PubMed Central

    Letort, Véronique; Mahe, Paul; Cournède, Paul-Henry; de Reffye, Philippe; Courtois, Brigitte

    2008-01-01

    Background and Aims Prediction of phenotypic traits from new genotypes under untested environmental conditions is crucial to build simulations of breeding strategies to improve target traits. Although the plant response to environmental stresses is characterized by both architectural and functional plasticity, recent attempts to integrate biological knowledge into genetics models have mainly concerned specific physiological processes or crop models without architecture, and thus may prove limited when studying genotype × environment interactions. Consequently, this paper presents a simulation study introducing genetics into a functional–structural growth model, which gives access to more fundamental traits for quantitative trait loci (QTL) detection and thus to promising tools for yield optimization. Methods The GREENLAB model was selected as a reasonable choice to link growth model parameters to QTL. Virtual genes and virtual chromosomes were defined to build a simple genetic model that drove the settings of the species-specific parameters of the model. The QTL Cartographer software was used to study QTL detection of simulated plant traits. A genetic algorithm was implemented to define the ideotype for yield maximization based on the model parameters and the associated allelic combination. Key Results and Conclusions By keeping the environmental factors constant and using a virtual population with a large number of individuals generated by a Mendelian genetic model, results for an ideal case could be simulated. Virtual QTL detection was compared in the case of phenotypic traits – such as cob weight – and when traits were model parameters, and was found to be more accurate in the latter case. The practical interest of this approach is illustrated by calculating the parameters (and the corresponding genotype) associated with yield optimization of a GREENLAB maize model. The paper discusses the potentials of GREENLAB to represent environment × genotype

  19. Elements of attention in HIV-infected adults: Evaluation of an existing model

    PubMed Central

    Levine, Andrew J.; Hardy, David J.; Barclay, Terry R.; Reinhard, Matthew J.; Cole, Michael M.; Hinkin, Charles H.

    2010-01-01

    Because of the multifactorial nature of neuropsychological tests, attention remains poorly defined from a neuropsychological perspective, and conclusions made regarding attention across studies may be limited due to the different nature of the measures used. Thus, a more definitive schema for this neurocognitive domain is needed. We assessed the applicability of Mirsky and Duncan's (2001) neuropsychological model of attention to a cohort of 104 HIV+ adults. Our analysis resulted in a five-factor structure similar to that of previous studies, which explained 74.5% of the variance. However, based on the psychometric characteristics of the measures comprising each factor, we offer an alternative interpretation of the factors. Findings also indicate that one factor, which is generally not assessed in clinical neuropsychology settings, may be more predictive of real-world behaviors (such as medication adherence) than those composed of traditional measures. Suggestions for further research in this important area are discussed. PMID:17852595

  20. Existence of a metallic phase in a 1D Holstein Hubbard model at half filling

    NASA Astrophysics Data System (ADS)

    Krishna, Phani Murali; Chatterjee, Ashok

    2007-06-01

    The one-dimensional half-filled Holstein-Hubbard model is studied using a series of canonical transformations that include a phonon coherence effect partly dependent on and partly independent of the electron density, incorporating the on-site and nearest-neighbour phonon correlations together with the exact Bethe-ansatz solution of Lieb and Wu. It is shown that choosing a better variational phonon state makes the polarons more mobile and widens the intermediate metallic region at the charge-density-wave to spin-density-wave crossover recently predicted by Takada and Chatterjee. The presence of this metallic phase is indeed a favourable situation from the point of view of high-temperature superconductivity.
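
    For reference, the Hamiltonian conventionally meant by the 1D Holstein-Hubbard model is sketched below in standard notation (the paper's coupling conventions may differ):

    ```latex
    \begin{align*}
      H = -t\sum_{i,\sigma}\bigl(c^{\dagger}_{i\sigma}c_{i+1,\sigma}+\text{h.c.}\bigr)
          + U\sum_i n_{i\uparrow}n_{i\downarrow}
          + \omega_0\sum_i b^{\dagger}_i b_i
          + g\,\omega_0\sum_i n_i\bigl(b^{\dagger}_i+b_i\bigr),
    \end{align*}
    % with n_i = n_{i\uparrow} + n_{i\downarrow}. The competition between the
    % phonon-mediated effective attraction (set by g, \omega_0) and the on-site
    % repulsion U drives the CDW-SDW crossover discussed in the abstract.
    ```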

  1. Is astronomy possible with neutral ultrahigh energy cosmic ray particles existing in the standard model?

    SciTech Connect

    Tinyakov, P. G.; Tkachev, I. I.

    2008-03-15

    The recently observed correlation between HiRes stereo cosmic ray events with energies E ≈ 10¹⁹ eV and BL Lacertae objects occurs at an angle that strongly suggests that the primary particles are neutral. We analyze whether this correlation, if not a statistical fluctuation, can be explained within the Standard Model, i.e., assuming only known particles and interactions. We have not found a plausible process that can account for these correlations. The mechanism that comes closest, the conversion of protons into neutrons in the IR background of our Galaxy, still underproduces the required flux of neutral particles by about two orders of magnitude. The situation is different at E ≈ 10²⁰ eV, where the flux of cosmic rays at the Earth may contain up to a few percent of neutrons, indicating their extragalactic sources.

  2. [Near-infrared spectrum quantitative analysis model based on principal components selected by elastic net].

    PubMed

    Chen, Wan-hui; Liu, Xu-hua; He, Xiong-kui; Min, Shun-geng; Zhang, Lu-da

    2010-11-01

    Elastic net is an improvement of the least-squares method obtained by introducing L1 and L2 penalties, and it has the advantage of performing variable selection. A quantitative analysis model built with elastic net can improve prediction accuracy. Using 89 wheat samples as the experimental material, the spectral principal components of the samples were selected by elastic net. An analysis model was established relating the near-infrared spectra to the wheat protein content, confirming the feasibility of using elastic net to establish quantitative analysis models. In the experiment, the 89 wheat samples were randomly divided into two groups, with 60 samples forming the calibration set and 29 samples the prediction set. The 60 samples were used to build the analysis model to predict the protein contents of the 29 samples; the correlation coefficient (R) between the predicted values and the chemically observed values was 0.9849, with a mean relative error of 2.48%. To further investigate the feasibility and stability of the model, the 89 samples were randomly split five times, with 60 samples as the calibration set and 29 as the prediction set. The five groups of principal components selected by elastic net for model building were basically consistent, and compared with the PCR and PLS methods, the model prediction accuracies were all better than PCR and similar to PLS. Given that elastic net performs variable selection and the resulting model predicts well, elastic net is a suitable method for building chemometric quantitative analysis models. PMID:21284156
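
    A sketch of the paper's approach: project spectra onto principal components, let an elastic net (L1 + L2 penalties) zero out unhelpful components, then predict protein content. All data below are synthetic placeholders, not the wheat spectra:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import ElasticNetCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    lat = rng.normal(size=(89, 5))                         # latent spectral factors
    X = lat @ rng.normal(size=(5, 700)) + 0.05 * rng.normal(size=(89, 700))
    protein = lat @ np.array([0.8, 0.0, -0.5, 0.0, 0.3]) + 12.0

    X_tr, X_te, y_tr, y_te = train_test_split(X, protein, test_size=29, random_state=1)
    pca = PCA(n_components=20).fit(X_tr)                   # candidate components
    enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(pca.transform(X_tr), y_tr)
    print("components kept by the elastic net:", np.flatnonzero(enet.coef_))
    print("R on held-out samples:",
          np.corrcoef(enet.predict(pca.transform(X_te)), y_te)[0, 1])
    ```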

  3. The evolution and extinction of the ichthyosaurs from the perspective of quantitative ecospace modelling

    PubMed Central

    Dick, Daniel G.; Maxwell, Erin E.

    2015-01-01

    The role of niche specialization and narrowing in the evolution and extinction of the ichthyosaurs has been widely discussed in the literature. However, previous studies have concentrated on a qualitative discussion of these variables only. Here, we use the recently developed approach of quantitative ecospace modelling to provide a high-resolution quantitative examination of the changes in dietary and ecological niche experienced by the ichthyosaurs throughout their evolution in the Mesozoic. In particular, we demonstrate that despite recent discoveries increasing our understanding of taxonomic diversity among the ichthyosaurs in the Cretaceous, when viewed from the perspective of ecospace modelling, a clear trend of ecological contraction is visible as early as the Middle Jurassic. We suggest that this ecospace redundancy, if carried through to the Late Cretaceous, could have contributed to the extinction of the ichthyosaurs. Additionally, our results suggest a novel model to explain ecospace change, termed the ‘migration model’. PMID:26156130

  4. 18FDG synthesis and supply: a journey from existing centralized to future decentralized models.

    PubMed

    Uz Zaman, Maseeh; Fatima, Nosheen; Sajjad, Zafar; Zaman, Unaiza; Tahseen, Rabia; Zaman, Areeba

    2014-01-01

    Positron emission tomography (PET), as the functional component of current hybrid imaging (such as PET/CT or PET/MRI), seems set to dominate the horizon of medical imaging in the coming decades. 18F-fluorodeoxyglucose (18FDG) is the most commonly used probe in oncology, and also in cardiology and neurology, around the globe. However, the major capital cost and exorbitant running expenditure of low-to-medium-energy cyclotrons (about 20 MeV) and radiochemistry units are the main reasons for the low number of cyclotrons in contrast to the mushrooming growth of PET scanners. This fact, together with the relatively long half-life of 18F (110 minutes), has paved the way for a centralized model in which 18FDG is produced by commercial PET radiopharmacies and the finished product (a multi-dose vial with tungsten shielding) is dispensed to customers having only PET scanners. This has indeed reduced costs, but it leaves users dependent on the timely arrival of daily shipments, as any delay results in cancellation or rescheduling of PET procedures. In recent years, industry and academia have taken a step forward by producing low-energy, tabletop cyclotrons with compact and automated radiochemistry units (Lab-on-Chip). This decentralized strategy enables users to produce on-demand doses of a PET probe themselves, at reasonably low cost, using an automated and user-friendly technology. This technological development would provide a real impetus to the availability of a complete PET-based molecular imaging setup at an affordable cost to developing countries.

  5. A systematic review of the existing models of disordered eating: Do they inform the development of effective interventions?

    PubMed

    Pennesi, Jamie-Lee; Wade, Tracey D

    2016-02-01

    Despite significant advances in the development of prevention and treatment interventions for eating disorders and disordered eating over the last decade, there remains a pressing need to develop more effective interventions. In line with the 2008 Medical Research Council (MRC) framework from the United Kingdom for the development and evaluation of complex interventions to improve health, the development of sound theory is a necessary precursor to the development of effective interventions. The aim of the current review was to identify the existing models of disordered eating and, among them, those models that have informed the development of interventions for disordered eating. In addition, we examine the variables that most commonly appear across these models, in terms of future implications for the development of interventions for disordered eating. While an extensive range of theoretical models of the development of disordered eating were identified (N=54), only ten (18.5%) had progressed beyond mere description to the development of interventions that have been evaluated. It is recommended that future work examine whether interventions for eating disorders increase in efficacy when developed in line with theoretical considerations, that the initiation of new models give way to further development of existing models, and that intervention studies be utilised more extensively to inform the development of theory. PMID:26781985

  6. 18FDG synthesis and supply: a journey from existing centralized to future decentralized models.

    PubMed

    Uz Zaman, Maseeh; Fatima, Nosheen; Sajjad, Zafar; Zaman, Unaiza; Tahseen, Rabia; Zaman, Areeba

    2014-01-01

    Positron emission tomography (PET), as the functional component of current hybrid imaging (such as PET/CT or PET/MRI), seems set to dominate the horizon of medical imaging in the coming decades. 18F-fluorodeoxyglucose (18FDG) is the most commonly used probe in oncology, and also in cardiology and neurology, around the globe. However, the major capital cost and exorbitant running expenditure of low-to-medium-energy cyclotrons (about 20 MeV) and radiochemistry units are the main reasons for the low number of cyclotrons in contrast to the mushrooming growth of PET scanners. This fact, together with the relatively long half-life of 18F (110 minutes), has paved the way for a centralized model in which 18FDG is produced by commercial PET radiopharmacies and the finished product (a multi-dose vial with tungsten shielding) is dispensed to customers having only PET scanners. This has indeed reduced costs, but it leaves users dependent on the timely arrival of daily shipments, as any delay results in cancellation or rescheduling of PET procedures. In recent years, industry and academia have taken a step forward by producing low-energy, tabletop cyclotrons with compact and automated radiochemistry units (Lab-on-Chip). This decentralized strategy enables users to produce on-demand doses of a PET probe themselves, at reasonably low cost, using an automated and user-friendly technology. This technological development would provide a real impetus to the availability of a complete PET-based molecular imaging setup at an affordable cost to developing countries. PMID:25556425

  7. Evaluation of Modeled and Measured Energy Savings in Existing All Electric Public Housing in the Pacific Northwest

    SciTech Connect

    Gordon, Andrew; Lubliner, Michael; Howard, Luke; Kunkle, Rick; Salzberg, Emily

    2014-04-01

    This project analyzes the cost-effectiveness of energy savings measures installed by a large public housing authority in Salishan, a community in Tacoma, Washington. The research focuses on the modeled and measured energy usage of the first six phases of construction and compares the energy usage of those phases to that of phase 7. Market-ready energy solutions were also evaluated to improve the efficiency of new and existing (built since 2001) affordable housing in the marine climate of Washington State.

  8. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum

    PubMed Central

    Milic, Natasa M.; Trajkovic, Goran Z.; Bukumiric, Zoran M.; Cirkovic, Andja; Nikolic, Ivan M.; Milin, Jelena S.; Milic, Nikola V.; Savic, Marko D.; Corac, Aleksandar M.; Marinkovic, Jelena M.; Stanisavljevic, Dejana M.

    2016-01-01

    Background: Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. Methods: This was a prospective study conducted with third-year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed the final exam (440 of 545) of the obligatory introductory statistics course during 2013-14. Students' statistics achievement was stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Results: Mean exam scores for the blended learning group were higher than for the on-site group for both the final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and the knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023), with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). Conclusion: This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology-assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient

  9. PHYSIOLOGICALLY-BASED PHARMACOKINETIC ( PBPK ) MODEL FOR METHYL TERTIARY BUTYL ETHER ( MTBE ): A REVIEW OF EXISTING MODELS

    EPA Science Inventory

    MTBE is a volatile organic compound used as an oxygenate additive to gasoline, added to comply with the 1990 Clean Air Act. Previous PBPK models for MTBE were reviewed and incorporated into the Exposure Related Dose Estimating Model (ERDEM) software. This model also included an e...

  10. Improving the quantitative accuracy of cerebral oxygen saturation in monitoring the injured brain using atlas based Near Infrared Spectroscopy models.

    PubMed

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J E; Su, Zhangjie; Dehghani, Hamid

    2016-08-01

    The application of Near Infrared Spectroscopy (NIRS) for monitoring cerebral oxygen saturation within the brain is well established, albeit using temporal data that can only measure relative changes in the oxygenation state of the brain from a baseline. The focus of this investigation is to demonstrate that hybridisation of existing near infrared probe designs and reconstruction techniques can pave the way to a system and methods for monitoring the absolute oxygen saturation in the injured brain. Using registered atlas models in simulation, a novel method is outlined by which the quantitative accuracy and practicality of NIRS, for the specific use of monitoring the injured brain, can be improved, with cerebral saturation being recovered to within 10.1 ± 1.8% of the expected values. PMID:27003677

  11. Quantitative explanation of circuit experiments and real traffic using the optimal velocity model

    NASA Astrophysics Data System (ADS)

    Nakayama, Akihiro; Kikuchi, Macoto; Shibata, Akihiro; Sugiyama, Yuki; Tadaki, Shin-ichi; Yukawa, Satoshi

    2016-04-01

    We have experimentally confirmed that the occurrence of a traffic jam is a dynamical phase transition (Tadaki et al 2013 New J. Phys. 15 103034, Sugiyama et al 2008 New J. Phys. 10 033001). In this study, we investigate whether the optimal velocity (OV) model can quantitatively explain the results of experiments. The occurrence and non-occurrence of jammed flow in our experiments agree with the predictions of the OV model. We also propose a scaling rule for the parameters of the model. Using this rule, we obtain critical density as a function of a single parameter. The obtained critical density is consistent with the observed values for highway traffic.
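
    A minimal integration of the optimal velocity (OV) model discussed above, dv_n/dt = a [V(h_n) - v_n] on a circular road, with the common tanh-form OV function. The parameter values here are illustrative, not the paper's calibrated or scaled ones:

    ```python
    import numpy as np

    def V(h):   # optimal velocity as a function of headway h (m)
        return 16.8 * (np.tanh(0.086 * (h - 25.0)) + 0.913)

    N, L, a, dt = 30, 1000.0, 1.0, 0.05      # cars, ring length (m), sensitivity, step (s)
    x = np.sort(np.random.default_rng(2).uniform(0, L, N))
    v = V(np.diff(np.append(x, x[0] + L)))   # start near local equilibrium

    for _ in range(20000):
        h = (np.roll(x, -1) - x) % L         # headways on the ring
        v += a * (V(h) - v) * dt             # relax toward the optimal velocity
        x = (x + v * dt) % L
    print("mean speed:", v.mean(), " speed std (jam indicator):", v.std())
    ```

    With these values the uniform flow is linearly unstable (a < 2 V'(h) at the mean headway), so the speed dispersion grows as a jam emerges, mirroring the dynamical phase transition the study describes.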

  12. Experimentally validated quantitative linear model for the device physics of elastomeric microfluidic valves

    NASA Astrophysics Data System (ADS)

    Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French

    2007-03-01

    A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295μm lateral dimensions, 16-39μm membrane thickness, and 1-28psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150μm width) are shown to behave like thin springs.

  13. Quantitative mechanistically based dose-response modeling with endocrine-active compounds.

    PubMed Central

    Andersen, M E; Conolly, R B; Faustman, E M; Kavlock, R J; Portier, C J; Sheehan, D M; Wier, P J; Ziese, L

    1999-01-01

    A wide range of toxicity test methods is used or is being developed for assessing the impact of endocrine-active compounds (EACs) on human health. Interpretation of these data and their quantitative use in human and ecologic risk assessment will be enhanced by the availability of mechanistically based dose-response (MBDR) models to assist low-dose, interspecies, and in vitro to in vivo extrapolations. A quantitative dose-response modeling work group examined the state of the art for developing MBDR models for EACs and the near-term needs to develop, validate, and apply these models for risk assessments. Major aspects of this report relate to current status of these models, the objectives/goals in MBDR model development for EACs, low-dose extrapolation issues, regulatory inertia impeding acceptance of these approaches, and resource/data needs to accelerate model development and model acceptance by the research and the regulatory community. PMID:10421774

  14. Inhibition of microglial activation attenuates the development but not existing hypersensitivity in a rat model of neuropathy.

    PubMed

    Raghavendra, Vasudeva; Tanga, Flobert; DeLeo, Joyce A

    2003-08-01

    Microglia, the intrinsic macrophages of the central nervous system, have previously been shown to be activated in the spinal cord in several rat mononeuropathy models. Activation of microglia and the subsequent release of proinflammatory cytokines are known to play a role in inducing a behavioral hypersensitive state (hyperalgesia and allodynia) in these animals. The present study was undertaken to determine whether minocycline, an inhibitor of microglial activation, could attenuate both the development of and existing mechanical allodynia and hyperalgesia in an L5 spinal nerve transection model of neuropathic pain. In a preventive paradigm (to study the effect on the development of hypersensitive behaviors), minocycline (10, 20, or 40 mg/kg intraperitoneally) was administered daily, beginning 1 h before nerve transection. This regimen produced a decrease in mechanical hyperalgesia and allodynia, with a maximum inhibitory effect observed at the doses of 20 and 40 mg/kg. The attenuation of the development of hyperalgesia and allodynia by minocycline was associated with an inhibitory action on microglial activation and suppression of proinflammatory cytokines at the L5 lumbar spinal cord of the nerve-injured animals. The effect of minocycline on existing allodynia was examined after its intraperitoneal administration was initiated on day 5 post-L5 nerve transection. Although the postinjury administration of minocycline significantly inhibited microglial activation in neuropathic rats, it failed to attenuate existing hyperalgesia and allodynia. These data demonstrate that inhibition of microglial activation attenuated the development of behavioral hypersensitivity in a rat model of neuropathic pain but had no effect on the treatment of existing mechanical allodynia and hyperalgesia.

  15. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution

    PubMed Central

    Nielsen, Rasmus

    2015-01-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson-Kreitman-Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  16. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.

    PubMed

    Rohlfs, Rori V; Nielsen, Rasmus

    2015-09-01

    A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson-Kreitman-Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence

  17. Incorporation of caffeine into a quantitative model of fatigue and sleep.

    PubMed

    Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A

    2011-03-21

    A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) comparing model output successfully to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation.
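
    A minimal sketch of the qualitative mechanism described above: a homeostatic sleep drive that saturates with time awake, attenuated by first-order-eliminated caffeine as a proxy for competitive antagonism of adenosine. The constants are illustrative placeholders, not the paper's calibrated parameters:

        import numpy as np

        K_ELIM = np.log(2) / 5.0   # caffeine half-life of ~5 h (illustrative)
        ALPHA = 0.4                # max fractional drive reduction (illustrative)

        def homeostatic_drive(t_awake, caffeine0, t_since_dose):
            """Toy homeostatic sleep drive with caffeine attenuation."""
            H = 14.0 * (1 - np.exp(-t_awake / 18.0))        # rises while awake
            C = caffeine0 * np.exp(-K_ELIM * t_since_dose)  # plasma caffeine
            return H * (1 - ALPHA * C / (1 + C))            # attenuated drive

        # Drive after 18 h awake, with a dose taken 2 h earlier
        print(homeostatic_drive(18.0, caffeine0=2.0, t_since_dose=2.0))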

  18. Tannin structural elucidation and quantitative ³¹P NMR analysis. 1. Model compounds.

    PubMed

    Melone, Federica; Saladino, Raffaele; Lange, Heiko; Crestini, Claudia

    2013-10-01

    Tannins and flavonoids are secondary metabolites of plants that display a wide array of biological activities. This peculiarity is related to the inhibition of extracellular enzymes that occurs through the complexation of peptides by tannins. Neither the nature of these interactions nor, more fundamentally, the structure of these heterogeneous polyphenolic molecules is completely clear. This first paper describes the development of a new analytical method for the structural characterization of tannins on the basis of tannin model compounds employing an in situ labeling of all labile H groups (aliphatic OH, phenolic OH, and carboxylic acids) with a phosphorus reagent. The ³¹P NMR analysis of ³¹P-labeled samples allowed the unprecedented quantitative and qualitative structural characterization of hydrolyzable tannins, proanthocyanidins, and catechin tannin model compounds, forming the foundations for the quantitative structural elucidation of a variety of actual tannin samples described in part 2 of this series. PMID:24059814
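
    The quantitative step in such ³¹P NMR work reduces to integral bookkeeping against an internal standard of known amount; a hedged sketch, with all variable names and numbers invented:

        def oh_content(integral_region, integral_istd, moles_istd, sample_mass_g):
            """Quantitative 31P NMR bookkeeping after phosphitylation.

            Each functional-group region integrates in proportion to its
            molar amount, so referencing a region's integral to an internal
            standard of known amount yields mmol of that group per gram of
            sample.
            """
            moles = moles_istd * integral_region / integral_istd
            return 1000.0 * moles / sample_mass_g   # mmol/g

        # Aliphatic-OH region integrating 2.4x the standard, 0.01 mmol of
        # standard, 25 mg of tannin model compound:
        print(oh_content(2.4, 1.0, 1.0e-5, 0.025))  # ~0.96 mmol/g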

  19. The role of pre-existing disturbances in the effect of marine reserves on coastal ecosystems: a modelling approach.

    PubMed

    Savina, Marie; Condie, Scott A; Fulton, Elizabeth A

    2013-01-01

    We have used an end-to-end ecosystem model to explore responses over 30 years to coastal no-take reserves covering up to 6% of the fifty thousand square kilometres of continental shelf and slope off the coast of New South Wales (Australia). The model is based on the Atlantis framework, which includes a deterministic, spatially resolved three-dimensional biophysical model that tracks nutrient flows through key biological groups, as well as extraction by a range of fisheries. The model results support previous empirical studies in finding clear benefits of reserves to top predators such as sharks and rays throughout the region, while also showing how many of their major prey groups (including commercial species) experienced significant declines. It was found that the net impact of marine reserves was dependent on the pre-existing levels of disturbance (i.e. fishing pressure), and to a lesser extent on the size of the marine reserves. The high fishing scenario resulted in a strongly perturbed system, where the introduction of marine reserves had clear and mostly direct effects on biomass and functional biodiversity. However, under the lower fishing pressure scenario, the introduction of marine reserves caused both direct positive effects, mainly on shark groups, and indirect negative effects through trophic cascades. Our study illustrates the need to carefully align the design and implementation of marine reserves with policy and management objectives. Trade-offs may exist not only between fisheries and conservation objectives, but also among conservation objectives. PMID:23593432

  20. The complex genetic and molecular basis of a model quantitative trait

    PubMed Central

    Linder, Robert A.; Seidl, Fabian; Ha, Kimberly; Ehrenreich, Ian M.

    2016-01-01

    Quantitative traits are often influenced by many loci with small effects. Identifying most of these loci and resolving them to specific genes or genetic variants is challenging. Yet, achieving such a detailed understanding of quantitative traits is important, as it can improve our knowledge of the genetic and molecular basis of heritable phenotypic variation. In this study, we use a genetic mapping strategy that involves recurrent backcrossing with phenotypic selection to obtain new insights into an ecologically, industrially, and medically relevant quantitative trait—tolerance of oxidative stress, as measured based on resistance to hydrogen peroxide. We examine the genetic basis of hydrogen peroxide resistance in three related yeast crosses and detect 64 distinct genomic loci that likely influence the trait. By precisely resolving or cloning a number of these loci, we demonstrate that a broad spectrum of cellular processes contribute to hydrogen peroxide resistance, including DNA repair, scavenging of reactive oxygen species, stress-induced MAPK signaling, translation, and water transport. Consistent with the complex genetic and molecular basis of hydrogen peroxide resistance, we show two examples where multiple distinct causal genetic variants underlie what appears to be a single locus. Our results improve understanding of the genetic and molecular basis of a highly complex, model quantitative trait. PMID:26510497

  1. Evaluation of Lung Metastasis in Mouse Mammary Tumor Models by Quantitative Real-time PCR

    PubMed Central

    Abt, Melissa A.; Grek, Christina L.; Ghatnekar, Gautam S.; Yeh, Elizabeth S.

    2016-01-01

    Metastatic disease is the spread of malignant tumor cells from the primary cancer site to a distant organ and is the primary cause of cancer-associated death [1]. Common sites of metastatic spread include lung, lymph node, brain, and bone [2]. Mechanisms that drive metastasis are intense areas of cancer research. Consequently, effective assays to measure metastatic burden in distant sites of metastasis are instrumental for cancer research. Evaluation of lung metastases in mammary tumor models is generally performed by gross qualitative observation of lung tissue following dissection. Quantitative methods of evaluating metastasis are currently limited to ex vivo and in vivo imaging-based techniques that require user-defined parameters. Many of these techniques are at the whole organism level rather than the cellular level [3-6]. Although newer imaging methods utilizing multi-photon microscopy are able to evaluate metastasis at the cellular level [7], these highly elegant procedures are more suited to evaluating mechanisms of dissemination rather than quantitative assessment of metastatic burden. Here, a simple in vitro method to quantitatively assess metastasis is presented. Using quantitative Real-time PCR (QRT-PCR), tumor cell specific mRNA can be detected within the mouse lung tissue. PMID:26862835
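
    The quantitative comparison such an assay enables is typically expressed by the standard Livak 2^-ΔΔCt calculation; a sketch with invented Ct values (the paper's exact normalisation scheme may differ):

        def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
            """Livak 2^-ddCt relative quantification.

            ct_target / ct_ref: Ct values of the tumor-specific transcript
            and a housekeeping gene in the test lung sample; *_cal are the
            same measurements in a calibrator sample.
            """
            d_ct = ct_target - ct_ref               # normalise to housekeeper
            d_ct_cal = ct_target_cal - ct_ref_cal
            return 2.0 ** -(d_ct - d_ct_cal)

        # Tumor-bearing lung vs. a naive-lung calibrator (toy Ct values)
        print(relative_expression(24.1, 18.0, 33.5, 18.2))  # >> 1: metastatic burden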

  2. Quantitative Regression Models for the Prediction of Chemical Properties by an Efficient Workflow.

    PubMed

    Yin, Yongmin; Xu, Congying; Gu, Shikai; Li, Weihua; Liu, Guixia; Tang, Yun

    2015-10-01

    Rapid safety assessment is increasingly needed for the growing number of chemicals faced by both chemical industries and regulators around the world, and traditional experimental methods can no longer meet this demand. With the development of information technology and the growth of experimental data, in silico modeling has become a practical and rapid alternative for the assessment of chemical properties, especially for the toxicity prediction of organic chemicals. In this study, a quantitative regression workflow was built in KNIME to predict chemical properties. With this regression workflow, quantitative values of chemical properties can be obtained, unlike binary- or multi-classification models that can only give qualitative results. To illustrate the usage of the workflow, two predictive models were constructed based on datasets of Tetrahymena pyriformis toxicity and aqueous solubility. The q² values for 5-fold cross validation (q²cv) and external validation (q²test) of both types of models were greater than 0.7, which implies that our models are robust and reliable and that the workflow is convenient and efficient in the prediction of various chemical properties. PMID:27490968
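
    The q² criterion mentioned above is simply the coefficient of determination computed on held-out predictions; a sketch of 5-fold q²cv using scikit-learn, with a toy descriptor matrix standing in for curated molecular descriptors:

        import numpy as np
        from sklearn.model_selection import KFold
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        def q2_cv(X, y, n_splits=5, seed=0):
            """5-fold cross-validated q2: R2 computed on held-out folds only."""
            y_pred = np.empty_like(y, dtype=float)
            for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
                model = LinearRegression().fit(X[train], y[train])
                y_pred[test] = model.predict(X[test])
            return r2_score(y, y_pred)

        # Synthetic descriptors and endpoint; real workflows would use curated
        # descriptors for, e.g., T. pyriformis toxicity
        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 4))
        y = X @ np.array([0.8, -0.5, 0.3, 0.0]) + rng.normal(0, 0.2, 120)
        print(round(q2_cv(X, y), 3))  # > 0.7 would pass the robustness criterion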

  3. A quantitative dynamic systems model of health-related quality of life among older adults

    PubMed Central

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which generally suffers from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data from 194 older adults. This first validation tested the prediction that, given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development over time in the aging population. PMID:26604722

  4. A quantitative dynamic systems model of health-related quality of life among older adults.

    PubMed

    Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela

    2015-01-01

    Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which generally suffers from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data from 194 older adults. This first validation tested the prediction that, given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development over time in the aging population. PMID:26604722

  5. Semi-quantitative simulation for reasoning about physiological models of drug kinetics and effects.

    PubMed

    Leemann, T D; Blaschke, T F

    1990-12-01

    Inter- and intra-individual pharmacokinetic or pharmacodynamic variability is a major cause of adverse drug reactions or ineffective therapy. We are developing a computer-based tool for predicting the consequences of different physiological and pathological states and for reasoning about the possible causes of observed variability that may be useful both in a clinical decision support environment for drug monitoring and as a research aid in the investigation of the influence of physiological factors on drug response. It is based on a physiological approach to pharmacokinetic modeling in which actual anatomical or physiological entities, such as organs, tissues or blood flows, are represented. These models serve as the basis for semi-quantitative simulation, a method linking classical quantitative simulation (by numerical integration of differential equations) with artificial intelligence-based qualitative simulation techniques. This approach retains the mathematical power of the Systems Dynamics method for solving complex, time-varying systems containing feed-back loops, which are intractable for current qualitative knowledge representation techniques, and extends it with the causal reasoning and explanation power of symbolic inference techniques used in expert systems. It also allows problem solving in situations, so common in medicine, where initial values of variables and parameters cannot be estimated precisely. Simulation outputs are intended to be qualitatively, but not necessarily quantitatively, correct. The semi-quantitative simulation method was originally developed in MacLisp on a DEC 2060 and applied to modeling cardio-vascular physiology. We are porting the code to Common Lisp on a Macintosh and adapting the approach to pharmacology, concentrating on drug metabolism issues, with lidocaine pharmacokinetics as a test case. PMID:2263925
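
    The spirit of semi-quantitative simulation, producing qualitatively rather than numerically exact outputs when parameters are imprecise, can be mimicked by propagating parameter intervals through an ordinary model. A sketch with a one-compartment, loosely lidocaine-like example; all ranges are illustrative, not the system's actual knowledge base:

        import numpy as np

        def conc_interval(dose=100.0, t=2.0, n=2000, seed=6):
            """Propagate imprecise physiological parameters as intervals via
            Monte Carlo sampling, reporting a qualitative range rather than a
            point prediction for the plasma concentration (mg/L)."""
            rng = np.random.default_rng(seed)
            V = rng.uniform(60.0, 120.0, n)         # distribution volume (L)
            CL = rng.uniform(30.0, 60.0, n)         # hepatic clearance (L/h)
            c = (dose / V) * np.exp(-(CL / V) * t)  # one-compartment decay
            return c.min(), np.median(c), c.max()

        print(conc_interval())   # lower/typical/upper concentration at t = 2 h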

  6. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    PubMed

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contains errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described.

  7. The Analysis of Quantitative Traits for Simple Genetic Models from Parental, F1 and Backcross Data

    PubMed Central

    Elston, R. C.; Stewart, John

    1973-01-01

    The following models are considered for the genetic determination of quantitative traits: segregation at one locus, at two linked loci, at any number of equal and additive unlinked loci, and at one major locus and an indefinite number of equal and additive loci. In each case an appropriate likelihood is given for data on parental, F1 and backcross individuals, assuming that the environmental variation is normally distributed. Methods of testing and comparing the various models are presented, and methods are suggested for the simultaneous analysis of two or more traits. PMID:4711900
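
    For the single-locus case, the backcross likelihood described above is a 50/50 mixture of two normals with a common environmental variance; a sketch comparing it against a no-segregation alternative on toy data:

        import numpy as np
        from scipy.stats import norm

        def backcross_loglik(x, mu_aa, mu_Aa, sigma):
            """Log-likelihood of backcross phenotypes under a one-locus model.

            A backcross (F1 x parent) segregates the two genotypes 1:1, so
            each phenotype is a 50/50 mixture of two normals with a common
            environmental s.d., in the spirit of the likelihoods above.
            """
            dens = 0.5 * norm.pdf(x, mu_aa, sigma) + 0.5 * norm.pdf(x, mu_Aa, sigma)
            return np.sum(np.log(dens))

        # Compare the one-locus model to a no-segregation (single normal) model
        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(10, 1, 50), rng.normal(14, 1, 50)])
        ll1 = backcross_loglik(x, 10, 14, 1.0)
        ll0 = np.sum(norm.logpdf(x, x.mean(), x.std(ddof=1)))
        print(2 * (ll1 - ll0))  # large values favour segregation at a major locus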

  8. Curating and Preparing High-Throughput Screening Data for Quantitative Structure-Activity Relationship Modeling.

    PubMed

    Kim, Marlene T; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2016-01-01

    Publicly available bioassay data often contains errors. Curating massive bioassay data, especially high-throughput screening (HTS) data, for Quantitative Structure-Activity Relationship (QSAR) modeling requires the assistance of automated data curation tools. Using automated data curation tools is beneficial to users, especially those without prior computer skills, because many platforms have been developed and optimized based on standardized requirements. As a result, users do not need to extensively configure the curation tool prior to the application procedure. In this chapter, a freely available automatic tool to curate and prepare HTS data for QSAR modeling purposes will be described. PMID:27518634
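
    A toy version of the duplicate-resolution step that such curation tools automate; real pipelines canonicalise structures (e.g., SMILES) and strip salts before comparing, which this sketch abstracts into a plain string key:

        def curate_hts(records):
            """Minimal HTS curation sketch: normalise IDs, resolve duplicates.

            records: iterable of (structure_id, activity) pairs. Duplicate
            measurements that disagree are discarded as irreconcilable;
            concordant duplicates collapse to one entry.
            """
            seen = {}
            conflicts = set()
            for sid, act in records:
                sid = sid.strip().upper()            # trivial normalisation
                if sid in seen and seen[sid] != act:
                    conflicts.add(sid)
                seen[sid] = act
            return {s: a for s, a in seen.items() if s not in conflicts}

        raw = [("chembl1", 1), ("CHEMBL1 ", 1), ("chembl2", 0), ("ChEMBL2", 1)]
        print(curate_hts(raw))   # chembl2 dropped: replicate activities disagree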

  9. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    PubMed Central

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction: Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods: A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results: The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix evaluating model components and methodology. Conclusion: When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes and test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (e.g., HCV and HIV at the same time) might prove important for decision makers. PMID:26689908

  10. Linear and nonlinear quantitative structure-property relationship modelling of skin permeability.

    PubMed

    Khajeh, A; Modarress, H

    2014-01-01

    In this work, quantitative structure-property relationship (QSPR) models were developed to estimate skin permeability based on theoretically derived molecular descriptors and a diverse set of experimental data. The newly developed method combining modified particle swarm optimization (MPSO) and multiple linear regression (MLR) was used to select important descriptors and develop the linear model using a training set of 225 compounds. The adaptive neuro-fuzzy inference system (ANFIS) was used as an efficient nonlinear method to correlate the selected descriptors with experimental skin permeability data (log Kp). The linear and nonlinear models were assessed by internal and external validation. The obtained models with three descriptors show good predictive ability for the test set, with coefficients of determination for the MPSO-MLR and ANFIS models equal to 0.874 and 0.890, respectively. The QSPR study suggests that hydrophobicity (encoded as log P) is the most important factor in transdermal penetration. PMID:24090175

  11. CSML2SBML: a novel tool for converting quantitative biological pathway models from CSML into SBML.

    PubMed

    Li, Chen; Nagasaki, Masao; Ikeda, Emi; Sekiya, Yayoi; Miyano, Satoru

    2014-07-01

    CSML and SBML are XML-based model definition standards developed with the aim of creating exchange formats for modeling, visualizing and simulating biological pathways. In this article we report the release of a format convertor for quantitative pathway models, namely CSML2SBML. It translates models encoded in CSML into SBML without loss of structural and kinetic information. Simulation and parameter estimation of the resulting SBML model can be carried out with the compliant tool CellDesigner for further analysis. The convertor is based on the standards CSML version 3.0 and SBML Level 2 Version 4. In our experiments, 11 out of 15 pathway models in the CSML model repository and 228 models in the Macrophage Pathway Knowledgebase (MACPAK) were successfully converted to SBML models. The consistency of the resulting models was validated by the libSBML Consistency Check of CellDesigner. Furthermore, a converted SBML model, assigned the kinetic parameters translated from its CSML counterpart, reproduces in CellDesigner the same dynamics as the CSML model running on Cell Illustrator. CSML2SBML, along with instructions and examples for use, is available at http://csml2sbml.csml.org. PMID:24881961

  12. CSML2SBML: a novel tool for converting quantitative biological pathway models from CSML into SBML.

    PubMed

    Li, Chen; Nagasaki, Masao; Ikeda, Emi; Sekiya, Yayoi; Miyano, Satoru

    2014-07-01

    CSML and SBML are XML-based model definition standards developed with the aim of creating exchange formats for modeling, visualizing and simulating biological pathways. In this article we report the release of a format convertor for quantitative pathway models, namely CSML2SBML. It translates models encoded in CSML into SBML without loss of structural and kinetic information. Simulation and parameter estimation of the resulting SBML model can be carried out with the compliant tool CellDesigner for further analysis. The convertor is based on the standards CSML version 3.0 and SBML Level 2 Version 4. In our experiments, 11 out of 15 pathway models in the CSML model repository and 228 models in the Macrophage Pathway Knowledgebase (MACPAK) were successfully converted to SBML models. The consistency of the resulting models was validated by the libSBML Consistency Check of CellDesigner. Furthermore, a converted SBML model, assigned the kinetic parameters translated from its CSML counterpart, reproduces in CellDesigner the same dynamics as the CSML model running on Cell Illustrator. CSML2SBML, along with instructions and examples for use, is available at http://csml2sbml.csml.org.

  13. Exploring Existence Value

    NASA Astrophysics Data System (ADS)

    Madariaga, Bruce; McConnell, Kenneth E.

    1987-05-01

    The notion that individuals value the preservation of water resources independent of their own use of these resources is discussed. Issues in defining this value, termed "existence value," are explored. Economic models are employed to assess the role of existence value in benefit-cost analysis. The motives underlying existence value are shown to matter to contingent valuation measurement of existence benefits. A stylized contingent valuation experiment is used to study nonusers' attitudes regarding projects to improve water quality in the Chesapeake Bay. Survey results indicate that altruism is one of the motives underlying existence value and that goods other than environmental and natural resources may provide existence benefits.

  14. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    PubMed

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence Model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to Environmental Research Centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. PMID:23892022

  15. Downscaling SSPs in Bangladesh - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, A.; Barbour, E.; Salehin, M.; Hutton, C.; Lázár, A. N.; Nicholls, R. J.; Rahman, M. M.

    2015-12-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  16. Quantitative structure-activity relationship (QSAR) for insecticides: development of predictive in vivo insecticide activity models.

    PubMed

    Naik, P K; Singh, T; Singh, H

    2009-07-01

    Quantitative structure-activity relationship (QSAR) analyses were performed independently on data sets belonging to two groups of insecticides, namely the organophosphates and carbamates. Several types of descriptors including topological, spatial, thermodynamic, information content, lead likeness and E-state indices were used to derive quantitative relationships between insecticide activities and structural properties of chemicals. A systematic search approach based on missing value, zero value, simple correlation and multi-collinearity tests as well as the use of a genetic algorithm allowed the optimal selection of the descriptors used to generate the models. The QSAR models developed for both organophosphate and carbamate groups revealed good predictability with r² values of 0.949 and 0.838 as well as [image omitted] values of 0.890 and 0.765, respectively. In addition, a linear correlation was observed between the predicted and experimental LD50 values for the test set data, with r² of 0.871 and 0.788 for the organophosphate and carbamate groups, respectively, indicating that the prediction accuracy of the QSAR models was acceptable. The models were also tested successfully against external validation criteria. QSAR models developed in this study should help further design of novel potent insecticides.

  17. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    PubMed

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence Model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to Environmental Research Centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article.

  18. Downscaling SSPs in the GBM Delta - Integrating Science, Modelling and Stakeholders Through Qualitative and Quantitative Scenarios

    NASA Astrophysics Data System (ADS)

    Allan, Andrew; Barbour, Emily; Salehin, Mashfiqus; Munsur Rahman, Md.; Hutton, Craig; Lazar, Attila

    2016-04-01

    A downscaled scenario development process was adopted in the context of a project seeking to understand relationships between ecosystem services and human well-being in the Ganges-Brahmaputra delta. The aim was to link the concerns and priorities of relevant stakeholders with the integrated biophysical and poverty models used in the project. A 2-stage process was used to facilitate the connection between stakeholders' concerns and available modelling capacity: the first to qualitatively describe what the future might look like in 2050; the second to translate these qualitative descriptions into the quantitative form required by the numerical models. An extended, modified SSP approach was adopted, with stakeholders downscaling issues identified through interviews as being priorities for the southwest of Bangladesh. Detailed qualitative futures were produced, before modellable elements were quantified in conjunction with an expert stakeholder cadre. Stakeholder input, using the methods adopted here, allows the top-down focus of the RCPs to be aligned with the bottom-up approach needed to make the SSPs appropriate at the more local scale, and also facilitates the translation of qualitative narrative scenarios into a quantitative form that lends itself to incorporation of biophysical and socio-economic indicators. The presentation will describe the downscaling process in detail, and conclude with findings regarding the importance of stakeholder involvement (and logistical considerations), balancing model capacity with expectations and recommendations on SSP refinement at local levels.

  19. New Tools and the Road to Quantitative Models in Sedimentary Provenance Analysis

    NASA Astrophysics Data System (ADS)

    von Eynatten, Hilmar

    2010-05-01

    Sedimentary provenance analysis is one of the major techniques to link source area geology, climate evolution, and basin dynamics to the compositional characteristics of the clastic basin fill. The high potential of sediments for precise chronostratigraphic calibration, in combination with state-of-the-art provenance analysis, allows for detailed reconstruction of source area evolution in space and time. A wealth of new and/or refined analytical techniques has been developed in the last decade, especially high-precision single-grain geochemical and geochronological methods. Accordingly, ultrastable heavy minerals such as rutile or zircon provide inert mineral tracers in sedimentary systems, and their analysis yields precise information on source rock petrology and chronology. In terms of quantitative provenance analysis there is, however, a strong need to connect this detailed information on specific source rocks to the bulk mass transfer and sediment modification from source to sink. Such quantitative provenance models are still in their infancy for a number of reasons, among them (1) the overall complexity of the processes involved, including multiple feedback mechanisms, (2) the heterogeneity of data bases with respect to large-scale basin-wide studies, and (3) the lack of tailor-made and user-friendly statistical-numerical models allowing for both forward and inverse modelling that consider the compositional nature of most bulk sediment data. First steps towards fully quantitative models include (i) development of algorithms relating petrographic-mineralogic and geochemical data to sediment grain size, (ii) quantifying chemical, physical, and biological processes and their impact on sediment production and modification, (iii) compositional mixture models, and (iv) verifying these analytical modules in large-scale modern systems, followed by (v) similar ancient systems that are even more complicated due to diagenetic processes.
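
    The compositional mixture models of step (iii) can be prototyped as non-negative unmixing of a sediment composition into source end-members; a sketch using scipy, with the end-member compositions invented for illustration:

        import numpy as np
        from scipy.optimize import nnls

        # Columns: hypothetical source end-member compositions (fractions of
        # quartz, feldspar, lithics); the sediment sample shares the same rows.
        endmembers = np.array([[0.80, 0.30, 0.45],
                               [0.15, 0.55, 0.20],
                               [0.05, 0.15, 0.35]])
        sediment = np.array([0.55, 0.33, 0.12])   # observed basin-fill sample

        mix, resid = nnls(endmembers, sediment)   # non-negative mixing proportions
        mix /= mix.sum()                          # close to unit sum
        print(mix, resid)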

  20. Quantitative SHG imaging in osteoarthritis model mice, implying a diagnostic application.

    PubMed

    Kiyomatsu, Hiroshi; Oshima, Yusuke; Saitou, Takashi; Miyazaki, Tsuyoshi; Hikita, Atsuhiko; Miura, Hiromasa; Iimura, Tadahiro; Imamura, Takeshi

    2015-02-01

    Osteoarthritis (OA) restricts the daily activities of patients and significantly decreases their quality of life. The development of non-invasive quantitative methods for properly diagnosing and evaluating the process of degeneration of articular cartilage due to OA is essential. Second harmonic generation (SHG) imaging enables the observation of collagen fibrils in live tissues or organs without staining. In the present study, we employed SHG imaging of the articular cartilage in OA model mice ex vivo. Consequently, three-dimensional SHG imaging with successive image processing and statistical analyses allowed us to successfully characterize histopathological changes in the articular cartilage consistently confirmed on histological analyses. The quantitative SHG imaging technique presented in this study constitutes a diagnostic application of this technology in the setting of OA. PMID:25780732

  1. Mechanical Model Analysis for Quantitative Evaluation of Liver Fibrosis Based on Ultrasound Tissue Elasticity Imaging

    NASA Astrophysics Data System (ADS)

    Shiina, Tsuyoshi; Maki, Tomonori; Yamakawa, Makoto; Mitake, Tsuyoshi; Kudo, Masatoshi; Fujimoto, Kenji

    2012-07-01

    Precise evaluation of the stage of chronic hepatitis C with respect to fibrosis has become an important issue, both to prevent the occurrence of cirrhosis and to initiate appropriate therapeutic intervention such as viral eradication using interferon. Ultrasound tissue elasticity imaging, i.e., elastography, can visualize tissue hardness/softness, and its clinical usefulness for detecting and evaluating tumors has been studied. We have recently reported that the texture of the elasticity image changes as fibrosis progresses. To evaluate fibrosis progression quantitatively on the basis of ultrasound tissue elasticity imaging, we introduced a mechanical model of fibrosis progression, simulated the process by which hepatic fibrosis affects elasticity images, and compared the results with those of clinical data analysis. As a result, it was confirmed that even in diffuse diseases like chronic hepatitis, the patterns of elasticity images are related to fibrous structural changes caused by hepatic disease and can be used to derive features for quantitative evaluation of the fibrosis stage.

  2. A quantitative model for flux flow resistivity and Nernst effect of vortex fluid in high-temperature superconductors

    NASA Astrophysics Data System (ADS)

    Li, Rong; She, Zhen-Su; Yin, Lan; State Key Laboratory for Turbulence; Complex Systems Team

    Transport properties of the vortex fluid in high-temperature superconductors have been described in terms of the viscous dynamics of magnetic and thermal vortices. We have constructed a quantitative model by extending the Bardeen-Stephen model of damping viscosity to include the contributions of flux pinning at low temperature and vortex-vortex interaction in high magnetic field. A uniformly accurate description of the flux flow resistivity and Nernst signal is achieved for empirical data over a wide range of temperature and magnetic field strength. A discrepancy of three orders of magnitude between the data and the Anderson model of the Nernst signal is pointed out, suggesting the existence of anomalous transport in high-temperature superconductors beyond mere quantum and thermal fluctuations. The model makes it possible to derive a set of physical parameters characterizing the vortex dynamics from the Nernst signal, as we illustrate with an analysis of six samples of Bi2Sr2-yLayCuO6 and Bi2Sr2CaCu2O8+δ.
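
    The Bardeen-Stephen baseline that the model extends is compact enough to sketch; the thermally activated pinning factor and all parameter values below are illustrative placeholders, not the paper's fitted forms:

        import numpy as np

        def flux_flow_resistivity(B, T, rho_n=1.0e-6, Bc2_0=100.0, Tc=90.0, U0=500.0):
            """Bardeen-Stephen flux-flow resistivity with a toy pinning factor.

            rho_ff = rho_n * B / Bc2(T) is the classic free-flow result; the
            Arrhenius factor exp(-U0/T) crudely mimics the pinning suppression
            at low temperature that the paper treats more carefully.
            """
            if T >= Tc:
                return rho_n                        # normal state
            Bc2 = Bc2_0 * (1 - (T / Tc) ** 2)       # empirical Bc2(T) form
            return rho_n * (B / Bc2) * np.exp(-U0 / T)

        print(flux_flow_resistivity(B=10.0, T=60.0))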

  3. D-Factor: A Quantitative Model of Application Slow-Down in Multi-Resource Shared Systems

    SciTech Connect

    Lim, Seung-Hwan; Huh, Jae-Seok; Kim, Youngjae; Shipman, Galen M; Das, Chita

    2012-01-01

    Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this paper, we analyze the slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that multiple-resource contention creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for dilation factors of jobs in multi-resource systems. A job is characterized by a vector-valued loading statistic, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, on virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.
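
    A toy rendering of the quadratic dependence described above; the specific coupling form and matrix below are invented for illustration, whereas the paper calibrates its quadratic from measured loading statistics:

        import numpy as np

        def dilation_factors(L, M=None):
            """Toy quadratic dilation model for co-scheduled jobs.

            L: (n_jobs, n_resources) loading matrix; row i is job i's
            per-resource utilisation. Each job's slow-down grows with the
            load the *other* jobs place on the resources it uses, via the
            quadratic form l_i^T M o_i.
            """
            n, r = L.shape
            if M is None:
                M = np.eye(r)                       # uncoupled resources
            others = L.sum(axis=0) - L              # per-job background load
            return 1.0 + np.einsum('ij,jk,ik->i', L, M, others)

        L = np.array([[0.6, 0.1],                   # job 0: CPU-heavy
                      [0.1, 0.7],                   # job 1: I/O-heavy
                      [0.5, 0.5]])                  # job 2: mixed
        print(dilation_factors(L))                  # completion-time multipliers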

  4. Modelling CEC variations versus structural iron reduction levels in dioctahedral smectites. Existing approaches, new data and model refinements.

    PubMed

    Hadi, Jebril; Tournassat, Christophe; Ignatiadis, Ioannis; Greneche, Jean Marc; Charlet, Laurent

    2013-10-01

    A model was developed to describe how the 2:1 layer excess negative charge induced by the reduction of Fe(III) to Fe(II) by sodium dithionite buffered with citrate-bicarbonate is balanced, and applied to nontronites. This model is based on new experimental data and extends the structural interpretation introduced by a former model [36-38]. The increase in 2:1 layer negative charge due to Fe(III) to Fe(II) reduction is balanced by an excess adsorption of cations in the clay interlayers and a specific sorption of H(+) from solution. Prevalence of one compensating mechanism over the other is related to the growing lattice distortion induced by structural Fe(III) reduction. At low reduction levels, cation adsorption dominates and some of the incorporated protons react with structural OH groups, leading to a dehydroxylation of the structure. Starting from a moderate reduction level, other structural changes occur, leading to a reorganisation of the octahedral and tetrahedral lattice: migration or release of cations, intense dehydroxylation and bonding of protons to undersaturated oxygen atoms. Experimental data highlight some particular properties of ferruginous smectites regarding chemical reduction. Contrary to previous assumptions, the negative layer charge of nontronites does not simply increase towards a plateau value upon reduction. A peak is observed in the reduction domain. After this peak, the negative layer charge decreases upon extended reduction (>30%). The decrease is so dramatic that the layer charge of highly reduced nontronites can fall below that of their fully oxidised counterparts. Furthermore, the presence of a large amount of tetrahedral Fe seems to promote intense clay structural changes and Fe reducibility. Our newly acquired data clearly show that models currently available in the literature cannot be applied to the whole reduction range of clay structural Fe. Moreover, changes in the model normalising procedure clearly demonstrate that the investigated low

  5. A Model for Subglacial Flooding Along a Pre-Existing Hydrological Network during the Rapid Drainage of Supraglacial Lakes

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Tsai, V. C.

    2014-12-01

    Increasingly large numbers of supraglacial lakes form and drain every summer on the Greenland Ice Sheet. Presently, about 15% of the lakes drain rapidly within the timescale of a few hours, and the vertical discharge of water during these events may find a pre-existing subglacial hydrological network, particularly late in the melt season. Here, we present a model for subglacial flooding applied specifically to such circumstances. Given the short timescale of events, we treat ice and bed as purely elastic and assume that the fluid flow in the subglacial conduit is fully turbulent. We evaluate the effect of initial conduit opening, wi, on the rate of flood propagation and along-flow profiles of field variables. We find that floods propagate much faster, particularly in early times, for larger wi. For wi = 10 and 1 cm, for example, floods travel about 68% and 50% farther than in the fully coupled ice/bed scenario after 2 hours of drainage, respectively. Irrespective of the magnitude of wi, we also find that there exists a region of positive pressure gradient. This reversal of pressure gradient draws water in from the farfield and causes the conduit to narrow, respecting mass continuity. While the general shape of the profiles appears similar, greater conduit opening is found for larger wi. For wi = 10 and 1 cm, for example, the elastostatic conduit opening at the point of injection is about 1.39 and 1.26 times that of the fully coupled ice/bed scenario after 2 hours of drainage. The hypothesis of a pre-existing thin film of water is consistent with the spirit of contemporary state-of-the-art continuum models for subglacial hydrology. This also results in avoiding the pressure singularity, which is inherent in classical hydro-fracture models applied to fully coupled ice/bed scenarios, thus opening an avenue for integrating the likes of our model within continuum hydrological models. Furthermore, we foresee that the theory presented can be used to potentially infer

  6. Can we better use existing and emerging computing hardware to embed activity coefficient predictions in complex atmospheric aerosol models?

    NASA Astrophysics Data System (ADS)

    Topping, David; Alibay, Irfan; Ruske, Simon; Hindriksen, Vincent; Noisternig, Michael

    2016-04-01

    To predict the evolving concentration, chemical composition and ability of aerosol particles to act as cloud droplets, we rely on numerical modeling. Mechanistic models attempt to account for the movement of compounds between the gaseous and condensed phases at a molecular level. This 'bottom up' approach is designed to increase our fundamental understanding. However, such models rely on predicting the properties of molecules and their mixtures. For partitioning between the gaseous and condensed phases this includes: saturation vapour pressures; Henry's law coefficients; activity coefficients; diffusion coefficients and reaction rates. Current gas phase chemical mechanisms predict the existence of potentially millions of individual species. Within a dynamic ensemble model, this can often be used as justification for neglecting computationally expensive process descriptions. Indeed, it has so far been impossible, even at the single aerosol particle level, to embed fully coupled representations of process-level knowledge for all possible compounds; models typically rely on heavily parameterised descriptions, and the true sensitivity to uncertainties in molecular properties cannot yet be quantified. Relying on emerging numerical frameworks designed for the changing landscape of high-performance computing (HPC), in this study we show that comprehensive microphysical models, from the single particle to larger scales, can be developed to encompass a complete state-of-the-art knowledge of aerosol chemical and process diversity. We focus specifically on the ability to capture activity coefficients in liquid solutions using the UNIFAC method, profiling traditional coding strategies and those that exploit emerging hardware.
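
    The performance point generalises to any activity-coefficient model: evaluated as array operations over all particles at once, the per-particle cost collapses. The sketch below uses a two-parameter Margules model as a deliberately simple stand-in for UNIFAC, which involves combinatorial and residual group-contribution terms:

        import numpy as np

        def margules_gamma(x1, A12=1.2, A21=0.8):
            """Vectorised two-parameter Margules activity coefficients.

            Evaluating ln(gamma) for millions of particle compositions in a
            single numpy sweep (or on a GPU) is what makes embedding activity
            models in ensemble aerosol codes feasible; parameters illustrative.
            """
            x2 = 1.0 - x1
            ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
            ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
            return np.exp(ln_g1), np.exp(ln_g2)

        x1 = np.random.default_rng(3).uniform(0, 1, 1_000_000)  # one entry per particle
        g1, g2 = margules_gamma(x1)   # single vectorised sweep, no per-particle loop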

  7. Linking antisocial behavior, substance use, and personality: an integrative quantitative model of the adult externalizing spectrum.

    PubMed

    Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Benning, Stephen D; Kramer, Mark D

    2007-11-01

    Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena.
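
    The item response theory machinery referred to above rests on curves like the two-parameter logistic; a sketch with invented item parameters:

        import numpy as np

        def p_endorse(theta, a, b):
            """Two-parameter logistic IRT item response curve.

            Probability that a person at externalizing level theta endorses
            an item with discrimination a and severity (difficulty) b.
            """
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        theta = np.linspace(-3, 3, 7)               # latent externalizing scores
        print(p_endorse(theta, a=1.5, b=1.0))       # rare, highly discriminating item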

  8. Linking Antisocial Behavior, Substance Use, and Personality: An Integrative Quantitative Model of the Adult Externalizing Spectrum

    PubMed Central

    Krueger, Robert F.; Markon, Kristian E.; Patrick, Christopher J.; Benning, Stephen D.; Kramer, Mark D.

    2008-01-01

    Antisocial behavior, substance use, and impulsive and aggressive personality traits often co-occur, forming a coherent spectrum of personality and psychopathology. In the current research, the authors developed a novel quantitative model of this spectrum. Over 3 waves of iterative data collection, 1,787 adult participants selected to represent a range across the externalizing spectrum provided extensive data about specific externalizing behaviors. Statistical methods such as item response theory and semiparametric factor analysis were used to model these data. The model and assessment instrument that emerged from the research shows how externalizing phenomena are organized hierarchically and cover a wide range of individual differences. The authors discuss the utility of this model for framing research on the correlates and the etiology of externalizing phenomena. PMID:18020714

  9. Systematic Analysis of Quantitative Logic Model Ensembles Predicts Drug Combination Effects on Cell Signaling Networks

    PubMed Central

    Morris, MK; Clarke, DC; Osimiri, LC

    2016-01-01

    A major challenge in developing anticancer therapies is determining the efficacies of drugs and their combinations in physiologically relevant microenvironments. We describe here our application of “constrained fuzzy logic” (CFL) ensemble modeling of the intracellular signaling network for predicting inhibitor treatments that reduce the phospho‐levels of key transcription factors downstream of growth factors and inflammatory cytokines representative of hepatocellular carcinoma (HCC) microenvironments. We observed that the CFL models successfully predicted the effects of several kinase inhibitor combinations. Furthermore, the ensemble predictions revealed ambiguous predictions that could be traced to a specific structural feature of these models, which we resolved with dedicated experiments, finding that IL‐1α activates downstream signals through TAK1 and not MEKK1 in HepG2 cells. We conclude that CFL‐Q2LM (Querying Quantitative Logic Models) is a promising approach for predicting effective anticancer drug combinations in cancer‐relevant microenvironments. PMID:27567007
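
    Constrained fuzzy logic composes normalised Hill transfer functions with min/max logic gates; a sketch of the idea on an invented three-kinase motif (parameters illustrative, not fitted to HepG2 data):

        def hill(x, n=3.0, k=0.5):
            """Normalised Hill transfer function: maps upstream activity in
            [0, 1] to downstream activity in [0, 1] with f(1) == 1."""
            return (x**n / (k**n + x**n)) * (k**n + 1.0)   # normalise so f(1) == 1

        def AND(*inputs):        # logic gates on continuous activities
            return min(inputs)

        def OR(*inputs):
            return max(inputs)

        # Toy motif: TF activation needs kinase K1 AND (K2 OR K3) upstream
        k1, k2, k3 = 0.9, 0.2, 0.8
        tf = hill(AND(k1, OR(k2, k3)))
        print(round(tf, 3))      # predicted phospho-level of the transcription factor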

  10. Efficient Generation and Selection of Virtual Populations in Quantitative Systems Pharmacology Models

    PubMed Central

    Rieger, TR; Musante, CJ

    2016-01-01

    Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
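
    One simple way to select, rather than weight, virtual patients is density-proportional acceptance against the observed distribution; a sketch with an invented one-output model (the paper's selection technique is more refined):

        import numpy as np

        rng = np.random.default_rng(4)

        # 1. Generate virtual patients: plausible-range parameter samples
        #    (parameter names and the output model are purely illustrative).
        k_clear = rng.uniform(0.1, 2.0, 20_000)
        baseline = rng.uniform(50, 150, 20_000)
        output = baseline * np.exp(-k_clear)          # stand-in model prediction

        # 2. Select a virtual population whose outputs match an observed
        #    N(60, 10) clinical distribution, accepting each patient with
        #    probability proportional to the target density at its output.
        target = np.exp(-0.5 * ((output - 60) / 10) ** 2)
        keep = rng.uniform(0, target.max(), output.size) < target
        vpop = output[keep]
        print(vpop.size, vpop.mean(), vpop.std())     # roughly 60 +/- 10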

  11. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model

    PubMed Central

    Yoshioka, S.; Matsuhana, B.; Tanaka, S.; Inouye, Y.; Oshima, N.; Kinoshita, S.

    2011-01-01

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model. PMID:20554565
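
    The optics underlying the model is ideal multilayer interference; the sketch below uses typical literature values for guanine platelets (not this paper's measurements) and assumes, purely for illustration, a cosine projection from tilt angle to effective spacing:

        import numpy as np

        def peak_wavelength(d_cyto, n_plate=1.83, d_plate=70.0, n_cyto=1.33):
            """Ideal multilayer interference peak at normal incidence:
            lambda = 2 * (n_p d_p + n_c d_c), thicknesses in nm."""
            return 2.0 * (n_plate * d_plate + n_cyto * d_cyto)

        # Venetian-blind picture: tilting the platelets by phi shrinks the
        # effective cytoplasm spacing along the stack normal.
        for phi_deg in (0, 15, 30):
            d_c = 180.0 * np.cos(np.radians(phi_deg))
            print(phi_deg, round(peak_wavelength(d_c), 1))  # colour blue-shifts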

  12. [A NIR qualitative and quantitative model of 8 kinds of carbonate-containing mineral Chinese medicines].

    PubMed

    Yuan, Ming-Yang; Huang, Bi-Sheng; Yu, Chi; Liu, Yi-Mei; Chen, Ke-Li

    2014-01-01

    The aim of this paper is to apply near-infrared spectroscopy to construct a rapid identification method for 8 kinds of carbonate-containing mineral Chinese medicines. A qualitative model built with the cluster analysis method in the OPUS software accurately identified the 8 medicines. A near-infrared quantitative model was then established using the partial least squares (PLS) method for the 7 mineral Chinese medicines whose main component is calcium carbonate. Compared with reference values from EDTA titration, the quantitative model for calcium carbonate content showed good predictive performance: for contents between 47.61% and 99.17%, the average relative deviation of the predictions was 0.24% and the average recovery rate was 100.3%. The results show that near-infrared spectroscopy provides not only rapid identification of the 8 carbonate-containing mineral Chinese medicines but also accurate and reliable determination of the calcium carbonate content of the 7 medicines that contain this component.
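
    The calibration step pairs spectra with reference titration values and regresses by PLS; a sketch with synthetic spectra using scikit-learn:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Toy NIR calibration: X holds absorbance spectra (samples x wavelengths),
        # y the calcium carbonate content (%) from EDTA titration references.
        rng = np.random.default_rng(5)
        y = rng.uniform(47.0, 100.0, 60)
        wavelengths = np.linspace(0, 1, 200)
        X = np.outer(y, np.sin(4 * wavelengths)) + rng.normal(0, 0.5, (60, 200))

        pls = PLSRegression(n_components=3).fit(X, y)
        pred = pls.predict(X[:5]).ravel()
        print(np.round(pred, 2), np.round(y[:5], 2))   # calibration check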

  13. Growth mixture modeling as an exploratory analysis tool in longitudinal quantitative trait loci analysis.

    PubMed

    Chang, Su-Wei; Choi, Seung Hoan; Li, Ke; Fleur, Rose Saint; Huang, Chengrui; Shen, Tong; Ahn, Kwangmi; Gordon, Derek; Kim, Wonkuk; Wu, Rongling; Mendell, Nancy R; Finch, Stephen J

    2009-12-15

    We examined the properties of growth mixture modeling in finding longitudinal quantitative trait loci in a genome-wide association study. Two software packages are commonly used in these analyses: Mplus and the SAS TRAJ procedure. We analyzed the 200 replicates of the simulated data with these programs using three tests: the likelihood-ratio test statistic, a direct test of genetic model coefficients, and the chi-square test classifying subjects based on the trajectory model's posterior Bayesian probability. The Mplus program was not effective in this application due to its computational demands. The distributions of these tests applied to genes not related to the trait were sensitive to departures from Hardy-Weinberg equilibrium. The likelihood-ratio test statistic was not usable in this application because its distribution was far from the expected asymptotic distributions when applied to markers with no genetic relation to the quantitative trait. The other two tests were satisfactory. Power was still substantial when we used markers near the gene rather than the gene itself. That is, growth mixture modeling may be useful in genome-wide association studies. For markers near the actual gene, there was somewhat greater power for the direct test of the coefficients and lesser power for the posterior Bayesian probability chi-square test.
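
    The posterior-classification test reduces to a contingency-table chi-square between genotype and assigned trajectory class; a sketch with invented counts:

        import numpy as np
        from scipy.stats import chi2_contingency

        # Cross-tabulate genotype (rows: AA, Aa, aa) against the trajectory
        # class each subject is assigned to by highest posterior probability.
        table = np.array([[40, 10],
                          [35, 25],
                          [10, 30]])
        chi2, p, dof, expected = chi2_contingency(table)
        print(chi2, p, dof)   # small p suggests the marker tracks trajectory class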

  14. Double surface effect causes a peak in band-edge photocurrent spectra: a quantitative model

    NASA Astrophysics Data System (ADS)

    Turkulets, Yury; Bick, Tamar; Shalish, Ilan

    2016-09-01

    Band-edge photocurrent spectra are typically observed in either of two shapes: a peak or a step. In this study, we show that the photocurrent band-edge response of a GaN layer forms a peak, while the same response in GaN nanowires takes the form of a step, and both are red-shifted relative to the actual band-edge energy. This apparent inconsistency is not limited to GaN. The physics of this phenomenon has been unclear. To understand the physics behind these observations, we propose a model that explains the apparent discrepancy as resulting from a structure-dependent surface effect. To test the model, we experiment with a GaAs layer, showing that we can deliberately switch between a step and a peak. We use GaAs because it is available at a semi-insulating doping level. We demonstrate that using this quantitative model one may obtain the exact band-edge transition energy, regardless of the red-shift variance, as well as the density of the surface state charges that cause the red shift. The model thus adds quantitative features to photocurrent spectroscopy.

  15. Mechanism of variable structural colour in the neon tetra: quantitative evaluation of the Venetian blind model.

    PubMed

    Yoshioka, S; Matsuhana, B; Tanaka, S; Inouye, Y; Oshima, N; Kinoshita, S

    2011-01-01

    The structural colour of the neon tetra is distinguishable from those of, e.g., butterfly wings and bird feathers, because it can change in response to the light intensity of the surrounding environment. This fact clearly indicates the variability of the colour-producing microstructures. It has been known that an iridophore of the neon tetra contains a few stacks of periodically arranged light-reflecting platelets, which can cause multilayer optical interference phenomena. As a mechanism of the colour variability, the Venetian blind model has been proposed, in which the light-reflecting platelets are assumed to be tilted during colour change, resulting in a variation in the spacing between the platelets. In order to quantitatively evaluate the validity of this model, we have performed a detailed optical study of a single stack of platelets inside an iridophore. In particular, we have prepared a new optical system that can simultaneously measure both the spectrum and direction of the reflected light, which are expected to be closely related to each other in the Venetian blind model. The experimental results and detailed analysis are found to quantitatively verify the model.
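
    For intuition only (this is not the authors' measurement or analysis), an idealized two-layer interference estimate of how tilting the platelets, and hence shrinking the effective spacing, blue-shifts the first-order reflection peak; the refractive indices and thicknesses are assumed values.

        # Hedged sketch: ideal multilayer peak lambda = 2*(n1*d1 + n2*d2),
        # with the cytoplasm gap shrinking as the platelets tilt.
        import numpy as np

        n_plate, d_plate = 1.83, 70e-9   # assumed guanine platelet (m)
        n_cyto, d_cyto = 1.33, 100e-9    # assumed cytoplasm gap (m)

        for tilt_deg in (0, 10, 20):
            d_eff = d_cyto * np.cos(np.radians(tilt_deg))  # simplified geometry
            lam = 2 * (n_plate * d_plate + n_cyto * d_eff)
            print(f"tilt {tilt_deg:2d} deg -> peak ~ {lam * 1e9:.0f} nm")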

  16. Four-Point Bending as a Method for Quantitatively Evaluating Spinal Arthrodesis in a Rat Model

    PubMed Central

    Robinson, Samuel T; Svet, Mark T; Kanim, Linda A; Metzger, Melodie F

    2015-01-01

    The most common method of evaluating the success (or failure) of rat spinal fusion procedures is manual palpation testing. Whereas manual palpation provides only a subjective binary answer (fused or not fused) regarding the success of a fusion surgery, mechanical testing can provide more quantitative data by assessing variations in strength among treatment groups. We here describe a mechanical testing method to quantitatively assess single-level spinal fusion in a rat model, to improve on the binary and subjective nature of manual palpation as an end point for fusion-related studies. We tested explanted lumbar segments from Sprague–Dawley rat spines after single-level posterolateral fusion procedures at L4–L5. Segments were classified as ‘not fused,’ ‘restricted motion,’ or ‘fused’ by using manual palpation testing. After thorough dissection and potting of the spine, 4-point bending in flexion then was applied to the L4–L5 motion segment, and stiffness was measured as the slope of the moment–displacement curve. Results demonstrated statistically significant differences in stiffness among all groups, which were consistent with preliminary grading according to manual palpation. In addition, the 4-point bending results provided quantitative information regarding the quality of the bony union formed and therefore enabled the comparison of fused specimens. Our results demonstrate that 4-point bending is a simple, reliable, and effective way to describe and compare results among rat spines after fusion surgery. PMID:25730756
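
    A minimal sketch of the stiffness readout described above: the slope of the linear region of a (hypothetical) moment-displacement record from the 4-point bend.

        # Hedged sketch: stiffness as the fitted slope of moment vs displacement.
        import numpy as np

        displacement = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])  # mm, invented
        moment = np.array([0.0, 1.9, 4.1, 6.0, 8.2, 9.9])        # N*mm, invented

        slope, intercept = np.polyfit(displacement, moment, 1)
        print(f"flexural stiffness ~ {slope:.1f} N*mm/mm")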

  17. A Quantitative Model of Keyhole Instability Induced Porosity in Laser Welding of Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Pang, Shengyong; Chen, Weidong; Wang, Wen

    2014-06-01

    Quantitative prediction of the porosity defects in deep penetration laser welding has generally been considered a very challenging task. In this study, a quantitative model of porosity defects induced by keyhole instability in partial penetration CO2 laser welding of a titanium alloy is proposed. The three-dimensional keyhole instability, weld pool dynamics, and pore formation are determined by direct numerical simulation, and the results are compared to prior experimental results. It is shown that the simulated keyhole depth fluctuations could represent the variation trends in the number and average size of pores for the studied process conditions. Moreover, it is found that the predicted keyhole depth fluctuations can serve as a quantitative measure of the average pore size. The results also suggest that, due to the shadowing effect of keyhole wall humps, the rapid cooling of the surface of the keyhole tip before keyhole collapse could lead to a substantial decrease in vapor pressure inside the keyhole tip, which is suggested to be the mechanism by which shielding gas enters the pores.

  18. Mathematical model of the Tat-Rev regulation of HIV-1 replication in an activated cell predicts the existence of oscillatory dynamics in the synthesis of viral components

    PubMed Central

    2014-01-01

    ...analyzed alternative hypotheses for the re-cycling of the Rev proteins both in the cytoplasm and the nuclear pore complex. Conclusions: The quantitative mathematical model of the Tat-Rev regulation of HIV-1 replication predicts the existence of oscillatory dynamics, which depends on the efficacy of the Tat and TAR interaction as well as on the Rev-mediated transport processes. The biological relevance of the oscillatory regimes for the HIV-1 life cycle is discussed. PMID:25564443

  19. Evolution of a fold-thrust belt deforming a unit with pre-existing linear asperities: Insights from analog models

    NASA Astrophysics Data System (ADS)

    Burberry, Caroline M.; Swiatlowski, Jerlyn L.

    2016-06-01

    Heterogeneity, whether geometric or rheologic, in crustal material undergoing compression affects the geometry of the structures produced. This study documents the thrust fault geometries produced when discrete linear asperities are introduced into an analog model, scaled to represent bulk upper crustal properties, and compressed. Varying obliquities of the asperities relative to the imposed compression are used, and the resultant development of thrust fault traces and branch lines in map view is tracked. Once the model runs are completed, cross-sections are created and analyzed. The models show that asperities confined to the base layer promote the clustering of branch lines in the surface thrusts. Strong clustering in branch lines is also noted where several asperities are in close proximity or cross. Slight reverse-sense reactivation of asperities that cut through the sedimentary sequence is noted in cross-section, where the asperity and the subsequent thrust belt interact. The model results are comparable to the situation in the Dinaric Alps, where pre-existing faults to the SW of the NE Adriatic Fault Zone contribute to the clustering of branch lines developed in the surface fold-thrust belt. These results can therefore be used to evaluate the evolution of other basement-involved fold-thrust belts worldwide.

  20. Synthetic cannabinoids: In silico prediction of the cannabinoid receptor 1 affinity by a quantitative structure-activity relationship model.

    PubMed

    Paulke, Alexander; Proschak, Ewgenij; Sommer, Kai; Achenbach, Janosch; Wunder, Cora; Toennes, Stefan W

    2016-03-14

    The number of new synthetic psychoactive compounds increases steadily. Among these psychoactive compounds, the synthetic cannabinoids (SCBs) are the most popular and serve as a substitute for herbal cannabis. More than 600 of these substances already exist. For some SCBs the in vitro cannabinoid receptor 1 (CB1) affinity is known, but for the majority it is unknown. A quantitative structure-activity relationship (QSAR) model was developed that allows determination of an SCB's affinity for CB1 (expressed as the binding constant Ki) without reference substances. The chemically advanced template search (CATS) descriptor was used for vector representation of the compound structures. The similarity between two molecules was calculated using the feature-pair distribution similarity. The Ki values were calculated using the inverse distance weighting method. The prediction model was validated using a cross-validation procedure. The predicted Ki values of some new SCBs ranged from 20 (considerably higher affinity for CB1 than THC) to 468 (considerably lower affinity for CB1 than THC). The present QSAR model can serve as a simple, fast and cheap tool to get a first hint of the biological activity of new synthetic cannabinoids or of other new psychoactive compounds.
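
    A toy sketch of the prediction step, assuming pairwise distances in the CATS descriptor space have already been computed; the neighbours, distances, Ki values and the distance exponent are all hypothetical.

        # Hedged sketch: inverse-distance-weighted Ki prediction from the
        # nearest reference compounds of a query cannabinoid.
        import numpy as np

        def idw_predict(distances, ki_values, power=2.0, eps=1e-12):
            w = 1.0 / (np.asarray(distances) + eps) ** power
            return float(np.sum(w * np.asarray(ki_values)) / np.sum(w))

        dists = [0.12, 0.25, 0.40]     # hypothetical descriptor distances
        kis = [35.0, 110.0, 300.0]     # hypothetical reference Ki values
        print(f"predicted Ki ~ {idw_predict(dists, kis):.0f}")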

  1. Quantitative model for the generic 3D shape of ICMEs at 1 AU

    NASA Astrophysics Data System (ADS)

    Démoulin, P.; Janvier, M.; Masías-Meza, J. J.; Dasso, S.

    2016-10-01

    Context. Interplanetary imagers provide 2D projected views of the densest plasma parts of interplanetary coronal mass ejections (ICMEs), while in situ measurements provide magnetic field and plasma parameter measurements along the spacecraft trajectory, that is, along a 1D cut. The data therefore only give a partial view of the 3D structures of ICMEs. Aims: By studying a large number of ICMEs, crossed at different distances from their apex, we develop statistical methods to obtain a quantitative generic 3D shape of ICMEs. Methods: In a first approach we theoretically obtained the expected statistical distribution of the shock-normal orientation from assuming simple models of 3D shock shapes, including distorted profiles, and compared their compatibility with observed distributions. In a second approach we used the shock normal and the flux rope axis orientations together with the impact parameter to provide statistical information across the spacecraft trajectory. Results: The study of different 3D shock models shows that the observations are compatible with a shock that is symmetric around the Sun-apex line as well as with an asymmetry up to an aspect ratio of around 3. Moreover, flat or dipped shock surfaces near their apex can only be rare cases. Next, the sheath thickness and the ICME velocity have no global trend along the ICME front. Finally, regrouping all these new results and those of our previous articles, we provide a quantitative ICME generic 3D shape, including the global shape of the shock, the sheath, and the flux rope. Conclusions: The obtained quantitative generic ICME shape will have implications for several aims. For example, it constrains the output of typical ICME numerical simulations. It is also a base for studying the transport of high-energy solar and cosmic particles during an ICME propagation as well as for modeling and forecasting space weather conditions near Earth.

  2. Satellite contributions to the quantitative characterization of biomass burning for climate modeling

    NASA Astrophysics Data System (ADS)

    Ichoku, Charles; Kahn, Ralph; Chin, Mian

    2012-07-01

    Characterization of biomass burning from space has been the subject of an extensive body of literature published over the last few decades. Given the importance of this topic, we review how satellite observations contribute toward improving the representation of biomass burning quantitatively in climate and air-quality modeling and assessment. Satellite observations related to biomass burning may be classified into five broad categories: (i) active fire location and energy release, (ii) burned areas and burn severity, (iii) smoke plume physical disposition, (iv) aerosol distribution and particle properties, and (v) trace gas concentrations. Each of these categories involves multiple parameters used in characterizing specific aspects of the biomass-burning phenomenon. Some of the parameters are merely qualitative, whereas others are quantitative, although all are essential for improving the scientific understanding of the overall distribution (both spatial and temporal) and impacts of biomass burning. Some of the qualitative satellite datasets, such as fire locations, aerosol index, and gas estimates have fairly long-term records. They date back as far as the 1970s, following the launches of the DMSP, Landsat, NOAA, and Nimbus series of earth observation satellites. Although there were additional satellite launches in the 1980s and 1990s, space-based retrieval of quantitative biomass burning data products began in earnest following the launch of Terra in December 1999. Starting in 2000, fire radiative power, aerosol optical thickness and particle properties over land, smoke plume injection height and profile, and essential trace gas concentrations at improved resolutions became available. The 2000s also saw a large list of other new satellite launches, including Aqua, Aura, Envisat, Parasol, and CALIPSO, carrying a host of sophisticated instruments providing high quality measurements of parameters related to biomass burning and other phenomena. These improved data

  3. Experimental model considerations for the study of protein-energy malnutrition co-existing with ischemic brain injury.

    PubMed

    Prosser-Loose, Erin J; Smith, Shari E; Paterson, Phyllis G

    2011-05-01

    Protein-energy malnutrition (PEM) affects ~16% of patients at admission for stroke. We previously modeled this in a gerbil global cerebral ischemia model and found that PEM impairs functional outcome and influences mechanisms of ischemic brain injury and recovery. Since this model is no longer reliable, we investigated the utility of the rat 2-vessel occlusion (2-VO) with hypotension model of global ischemia for further study of this clinical problem. Male Sprague-Dawley rats were exposed to either a control diet (18% protein) or PEM induced by feeding a low protein diet (2% protein) for 7 d prior to either global ischemia or sham surgery. PEM did not significantly alter hippocampal CA1 neuron death (p = 0.195 by 2-factor ANOVA) or the increase in dendritic injury caused by exposure to global ischemia. Unexpectedly, however, a strong trend was evident for PEM to decrease the consistency of hippocampal damage, as shown by an increased incidence of unilateral or no hippocampal damage (p = 0.069 by chi-square analysis). Although PEM caused significant changes to baseline arterial blood pH, pO2, pCO2, and fasting glucose (p < 0.05), none of these variables (nor hematocrit) correlated significantly with CA1 cell counts in the malnourished group exposed to 2-VO (p > 0.269). Intra-ischemic tympanic temperature and blood pressure were strictly and equally controlled between ischemic groups. We conclude that co-existing PEM confounded the consistency of hippocampal injury in the 2-VO model. Although the mechanisms responsible were not identified, this model of brain ischemia should not be used for studying this co-morbidity factor.

  4. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    PubMed

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs suitable for large scale analyses. Use of WOMBAT is illustrated with a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html. PMID:17973343
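
    WOMBAT itself is a compiled package; as a loose analogue only, the sketch below fits a variance-component model by REML with statsmodels' MixedLM, using a family grouping as a crude stand-in for a pedigree random effect on synthetic data.

        # Hedged sketch: REML fit of a linear mixed model (not WOMBAT itself).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_fam, n_per = 20, 10
        fam = np.repeat(np.arange(n_fam), n_per)
        u = rng.normal(0, 1.0, n_fam)                # family random effects
        sex = rng.integers(0, 2, n_fam * n_per)      # a fixed effect
        y = 10 + 0.8 * sex + u[fam] + rng.normal(0, 1.5, n_fam * n_per)

        df = pd.DataFrame({"y": y, "sex": sex, "fam": fam})
        fit = smf.mixedlm("y ~ sex", df, groups=df["fam"]).fit(reml=True)
        print(fit.summary())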

  5. Quantitative evaluation of lake eutrophication responses under alternative water diversion scenarios: a water quality modeling based statistical analysis approach.

    PubMed

    Liu, Yong; Wang, Yilin; Sheng, Hu; Dong, Feifei; Zou, Rui; Zhao, Lei; Guo, Huaicheng; Zhu, Xiang; He, Bin

    2014-01-15

    China is confronting the challenge of accelerated lake eutrophication, with Lake Dianchi considered the most serious case. Eutrophication control for Lake Dianchi began in the mid-1980s. However, decision makers have been puzzled by the lack of visible water quality response to past efforts given the tremendous investment. Decision makers therefore need a scientifically sound way to quantitatively evaluate the response of lake water quality to proposed management measures and engineering works. We used a water quality modeling based scenario analysis approach to quantitatively evaluate the eutrophication responses of Lake Dianchi to an under-construction water diversion project. The primary analytic framework was built on a three-dimensional hydrodynamic, nutrient fate and transport, and algae dynamics model, which had previously been calibrated and validated using historical data. We designed 16 scenarios to analyze the water quality effects of three driving forces, including watershed nutrient loading, variations in diverted inflow water, and lake water level. A two-step statistical analysis consisting of an orthogonal test analysis and linear regression was then conducted to distinguish the contributions of the various driving forces to lake water quality. The analysis results show that (a) the different ways of managing the diversion project would result in different water quality responses in Lake Dianchi, though the differences do not appear to be significant; (b) the maximum reductions in annual average and peak Chl-a concentration from the various ways of operating the diversion project are 11% and 5%, respectively; (c) a combined 66% watershed load reduction and water diversion can reduce the lake hypoxia volume percentage from the existing 6.82% to 3.00%; and (d) the water diversion will decrease the occurrence of algal blooms, and the effect of algae reduction can be enhanced if the diverted water is seasonally allocated such that wet

  7. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2D Gaussian (sigma ≈ 35°, r ≈ 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparent sigmoidal tuning using a restricted range of stimuli (±40°). 2) Spiral tuning and invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for ≈10° shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
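
    A small sketch of the tuning-curve analysis mentioned above: fitting a 2D Gaussian (centre, width and baseline free) to synthetic azimuth-elevation heading responses; all response data here are simulated, not recordings.

        # Hedged sketch: 2D Gaussian fit to a heading tuning surface.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(xy, amp, az0, el0, sigma, base):
            az, el = xy
            return amp * np.exp(-((az - az0)**2 + (el - el0)**2)
                                / (2 * sigma**2)) + base

        az, el = np.meshgrid(np.linspace(-90, 90, 13), np.linspace(-45, 45, 7))
        xy = np.vstack([az.ravel(), el.ravel()])
        rng = np.random.default_rng(2)
        resp = gauss2d(xy, 40, 10, -5, 35, 5) + rng.normal(0, 2, xy.shape[1])

        popt, _ = curve_fit(gauss2d, xy, resp, p0=[30, 0, 0, 30, 0])
        print(f"fitted sigma ~ {popt[3]:.1f} deg")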

  8. Development of quantitative interspecies toxicity relationship modeling of chemicals to fish.

    PubMed

    Fatemi, M H; Mousa Shahroudi, E; Amini, Z

    2015-09-01

    In this work, quantitative interspecies toxicity relationship methodologies were used to improve the predictive power of an interspecies toxicity model. The most relevant descriptors, selected by stepwise multiple linear regression, together with the toxicity of each chemical to Daphnia magna, were used to predict the toxicities of chemicals to fish. The modeling methods used to develop linear and nonlinear models were multiple linear regression (MLR), random forest (RF), artificial neural network (ANN) and support vector machine (SVM). The obtained results indicate the superiority of the SVM model over the other models. Robustness and reliability of the constructed SVM model were evaluated using the leave-one-out cross-validation method (Q² = 0.69, SPRESS = 0.822) and a Y-randomization test (R² = 0.268 for 30 trials). Furthermore, the chemical applicability domains of these models were determined via the leverage approach. The developed SVM model was used to predict the toxicity to fish of 46 compounds whose experimental fish toxicities had not previously been reported, from their toxicities to D. magna and the relevant molecular descriptors.
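
    A hedged sketch of the validation loop described above: a leave-one-out Q² for an SVM regression mapping D. magna toxicity plus descriptors to fish toxicity; the data and hyperparameters are synthetic stand-ins.

        # Hedged sketch: LOO cross-validated Q2 for an SVM regression.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 5))   # col 0: D. magna toxicity; rest: descriptors
        y = 0.9 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.3, 60)

        pred = cross_val_predict(SVR(kernel="rbf", C=10.0), X, y, cv=LeaveOneOut())
        q2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
        print(f"LOO Q2 = {q2:.2f}")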

  9. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    NASA Astrophysics Data System (ADS)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m²/(MPa·s), whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
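
    For scale, a back-of-envelope Hagen-Poiseuille estimate (a drastic simplification of the full hydraulic model) of lumen-specific conductivity versus tracheid diameter; the viscosity and diameters are assumed, but the output lands in the same units and order of magnitude as the threshold quoted above.

        # Hedged sketch: conductivity per unit lumen area, k = d^2 / (32 * mu).
        import numpy as np

        mu = 1.0e-3                                    # Pa*s, water at ~20 C
        diameters = np.array([10e-6, 20e-6, 30e-6])    # assumed lumen diameters (m)

        k = diameters**2 / (32 * mu)                   # m^2 / (Pa*s)
        for d, kk in zip(diameters, k * 1e6):          # convert to m^2/(MPa*s)
            print(f"d = {d * 1e6:.0f} um -> k ~ {kk:.3f} m^2/(MPa*s)")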

  10. Toxicity challenges in environmental chemicals: Prediction of human plasma protein binding through quantitative structure-activity relationship (QSAR) models

    EPA Science Inventory

    The present study explores the merit of utilizing available pharmaceutical data to construct a quantitative structure-activity relationship (QSAR) for prediction of the fraction of a chemical unbound to plasma protein (Fub) in environmentally relevant compounds. Independent model...

  11. A Framework for Quantitative Modeling of Neural Circuits Involved in Sleep-to-Wake Transition

    PubMed Central

    Sorooshyari, Siamak; Huerta, Ramón; de Lecea, Luis

    2015-01-01

    Identifying the neuronal circuits and dynamics of sleep-to-wake transition is essential to understanding brain regulation of behavioral states, including sleep–wake cycles, arousal, and hyperarousal. Recent work by different laboratories has used optogenetics to determine the role of individual neuromodulators in state transitions. The optogenetically driven data do not yet provide a multi-dimensional schematic of the mechanisms underlying changes in vigilance states. This work presents a modeling framework to interpret, assist, and drive research on the sleep-regulatory network. We identify feedback, redundancy, and gating hierarchy as three fundamental aspects of this model. The presented model is expected to expand as additional data on the contribution of each transmitter to a vigilance state becomes available. Incorporation of conductance-based models of neuronal ensembles into this model and existing models of cortical excitability will provide more comprehensive insight into sleep dynamics as well as sleep and arousal-related disorders. PMID:25767461

  12. Bifurcation analysis of an existing mathematical model reveals novel treatment strategies and suggests potential cure for type 1 diabetes.

    PubMed

    Nielsen, Kenneth H M; Pociot, Flemming M; Ottesen, Johnny T

    2014-09-01

    Type 1 diabetes is a disease with serious personal and socioeconomic consequences that has recently attracted the attention of modellers. But as models of this disease tend to be complicated, there has been only limited mathematical analysis to date. Here we address this problem by providing a bifurcation analysis of a previously published mathematical model for the early stages of type 1 diabetes in diabetes-prone NOD mice, which is based on the data available in the literature. We also show positivity and the existence of a family of attracting trapping regions in the positive 5D cone, converging towards a smaller trapping region, which is the intersection over the family. All these trapping regions are compact sets, and thus practical weak persistence is guaranteed. We conclude our analysis by proposing four novel treatment strategies: increasing the phagocytic ability of resting macrophages, of activated macrophages, or of both simultaneously, and adding additional macrophages to the site of inflammation. The latter seems counter-intuitive at first glance, but it nevertheless appears to be the most promising strategy, as evidenced by recent results.

  13. A new scoring system for the chances of identifying a BRCA1/2 mutation outperforms existing models including BRCAPRO

    PubMed Central

    Evans, D; Eccles, D; Rahman, N; Young, K; Bulman, M; Amir, E; Shenton, A; Howell, A; Lalloo, F

    2004-01-01

    Methods: DNA samples from affected subjects from 422 non-Jewish families with a history of breast and/or ovarian cancer were screened for BRCA1 mutations and a subset of 318 was screened for BRCA2 by whole gene screening techniques. Using a combination of results from screening and the family history of mutation negative and positive kindreds, a simple scoring system (Manchester scoring system) was devised to predict pathogenic mutations and particularly to discriminate at the 10% likelihood level. A second separate dataset of 192 samples was subsequently used to test the model's predictive value. This was further validated on a third set of 258 samples and compared against existing models. Results: The scoring system includes a cut-off at 10 points for each gene. This equates to >10% probability of a pathogenic mutation in BRCA1 and BRCA2 individually. The Manchester scoring system had the best trade-off between sensitivity and specificity at 10% prediction for the presence of mutations as shown by its highest C-statistic and was far superior to BRCAPRO. Conclusion: The scoring system is useful in identifying mutations particularly in BRCA2. The algorithm may need modifying to include pathological data when calculating whether to screen for BRCA1 mutations. It is considerably less time-consuming for clinicians than using computer models and if implemented routinely in clinical practice will aid in selecting families most suitable for DNA sampling for diagnostic testing. PMID:15173236
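
    A minimal sketch of the comparison criterion used above: the C-statistic is the ROC AUC of a risk score against observed mutation status; the scores and carrier labels below are simulated, not study data.

        # Hedged sketch: comparing two scoring models by C-statistic (ROC AUC).
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        carrier = rng.integers(0, 2, 200)                # 1 = mutation found
        score_a = 1.2 * carrier + rng.normal(0, 1, 200)  # hypothetical model A
        score_b = 0.6 * carrier + rng.normal(0, 1, 200)  # hypothetical model B

        print("C-statistic A:", round(roc_auc_score(carrier, score_a), 3))
        print("C-statistic B:", round(roc_auc_score(carrier, score_b), 3))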

  14. A threshold of mechanical strain intensity for the direct activation of osteoblast function exists in a murine maxilla loading model.

    PubMed

    Suzuki, Natsuki; Aoki, Kazuhiro; Marcián, Petr; Borák, Libor; Wakabayashi, Noriyuki

    2016-10-01

    The response to the mechanical loading of bone tissue has been extensively investigated; however, precisely how much strain intensity is necessary to promote bone formation remains unclear. Combination studies utilizing histomorphometric and numerical analyses were performed using the established murine maxilla loading model to clarify the threshold of mechanical strain needed to accelerate bone formation activity. For 7 days, 191 kPa loading stimulation for 30 min/day was applied to C57BL/6J mice. Two regions of interest, the AWAY region (away from the loading site) and the NEAR region (near the loading site), were determined. The inflammatory score increased in the NEAR region, but not in the AWAY region. A strain intensity map obtained from [Formula: see text] images was superimposed onto the images of the bone formation inhibitor, sclerostin-positive cell localization. The number of sclerostin-positive cells significantly decreased after mechanical loading of more than [Formula: see text] in the AWAY region, but not in the NEAR region. The mineral apposition rate, which shows the bone formation ability of osteoblasts, was accelerated at the site of surface strain intensity, namely around [Formula: see text], but not at the site of lower surface strain intensity, which was around [Formula: see text] in the AWAY region, thus suggesting the existence of a strain intensity threshold for promoting bone formation. Taken together, our data suggest that a threshold of mechanical strain intensity for the direct activation of osteoblast function and the reduction of sclerostin exists in a murine maxilla loading model in the non-inflammatory region.

  15. Numerical modeling of the seismic response of a large pre-existing landslide in the Marmara region

    NASA Astrophysics Data System (ADS)

    Bourdeau, Céline; Lenti, Luca; Martino, Salvatore

    2015-04-01

    Turkey is one of the most geologically active regions of Europe and is prone to natural hazards, in particular earthquakes and landslides. Detailed seismological studies show that a catastrophic event is now expected in the Marmara region along the North Anatolian Fault Zone (NAFZ). On the shores of the Marmara Sea, about 30 km east of Istanbul and 15 km north of the NAFZ, urbanization is growing fast despite the presence of pre-existing large landslides. Whether such landslides could be reactivated under seismic shaking is a key question. In the framework of the MARsite European project, we selected one of the most critical landslides, namely the Büyükçekmece landslide, in order to assess its local seismic response. Based on detailed geophysical and geotechnical field investigations, a high-resolution engineering-geological model of the landslide slope was reconstructed. Numerical modeling was carried out on a longitudinal cross-section of this landslide with the 2D finite-difference code FLAC in order to assess the local seismic response of the slope and to evaluate the consistency of conditions suitable for the earthquake-induced reactivation of the landslide. The obtained ground-motion amplification pattern along the slope surface is very complex and is strongly influenced by property changes between the pre-existing landslide mass and the surrounding material. Further comparisons of 2D versus 1D ground-motion amplifications on the one hand and 2D versus topographic site effects on the other hand will shed light on the parameters controlling the spatial variations of ground-motion amplification along the slope surface.

  16. Mathematical Modelling of a Brain Tumour Initiation and Early Development: A Coupled Model of Glioblastoma Growth, Pre-Existing Vessel Co-Option, Angiogenesis and Blood Perfusion.

    PubMed

    Cai, Yan; Wu, Jie; Li, Zhiyong; Long, Quan

    2016-01-01

    We propose a coupled mathematical modelling system to investigate glioblastoma growth in response to dynamic changes in chemical and haemodynamic microenvironments caused by pre-existing vessel co-option, remodelling, collapse and angiogenesis. A typical tree-like architecture network with different orders for vessel diameter is designed to model pre-existing vasculature in host tissue. The chemical substances including oxygen, vascular endothelial growth factor, extra-cellular matrix and matrix degradation enzymes are calculated based on the haemodynamic environment which is obtained by coupled modelling of intravascular blood flow with interstitial fluid flow. The haemodynamic changes, including vessel diameter and permeability, are introduced to reflect a series of pathological characteristics of abnormal tumour vessels including vessel dilation, leakage, angiogenesis, regression and collapse. Migrating cells are included as a new phenotype to describe the migration behaviour of malignant tumour cells. The simulation focuses on the avascular phase of tumour development and stops at an early phase of angiogenesis. The model is able to demonstrate the main features of glioblastoma growth in this phase such as the formation of pseudopalisades, cell migration along the host vessels, the pre-existing vasculature co-option, angiogenesis and remodelling. The model also enables us to examine the influence of initial conditions and local environment on the early phase of glioblastoma growth.

  20. Influence of mom and dad: quantitative genetic models for maternal effects and genomic imprinting.

    PubMed

    Santure, Anna W; Spencer, Hamish G

    2006-08-01

    The expression of an imprinted gene is dependent on the sex of the parent it was inherited from, and as a result reciprocal heterozygotes may display different phenotypes. In contrast, maternal genetic terms arise when the phenotype of an offspring is influenced by the phenotype of its mother beyond the direct inheritance of alleles. Both maternal effects and imprinting may contribute to resemblance between offspring of the same mother. We demonstrate that two standard quantitative genetic models for deriving breeding values, population variances and covariances between relatives, are not equivalent when maternal genetic effects and imprinting are acting. Maternal and imprinting effects introduce both sex-dependent and generation-dependent effects that result in differences in the way additive and dominance effects are defined for the two approaches. We use a simple example to demonstrate that both imprinting and maternal genetic effects add extra terms to covariances between relatives and that model misspecification may over- or underestimate true covariances or lead to extremely variable parameter estimation. Thus, an understanding of various forms of parental effects is essential in correctly estimating quantitative genetic variance components. PMID:16751674

  1. Understanding responder neurobiology in schizophrenia using a quantitative systems pharmacology model: application to iloperidone.

    PubMed

    Geerts, Hugo; Roberts, Patrick; Spiros, Athan; Potkin, Steven

    2015-04-01

    The concept of targeted therapies remains a holy grail for the pharmaceutical drug industry for identifying responder populations or new drug targets. Here we provide quantitative systems pharmacology as an alternative to the more traditional approach of retrospective responder pharmacogenomics analysis and applied this to the case of iloperidone in schizophrenia. This approach implements the actual neurophysiological effect of genotypes in a computer-based biophysically realistic model of human neuronal circuits, is parameterized with human imaging and pathology, and is calibrated by clinical data. We kept the drug pharmacology constant, but allowed the biological model coupling values to fluctuate in a restricted range around their calibrated values, thereby simulating random genetic mutations and representing variability in patient response. Using hypothesis-free Design of Experiments methods, the coupling between the dopamine D4 receptor and the AMPA (alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor in cortical neurons was found to drive the beneficial effect of iloperidone, likely corresponding to rs2513265 upstream of the GRIA4 gene identified in a traditional pharmacogenomics analysis. The serotonin 5-HT3 receptor-mediated effect on interneuron gamma-aminobutyric acid conductance was identified as the process that moderately drove the differentiation of iloperidone versus ziprasidone. This paper suggests that reverse-engineered quantitative systems pharmacology is a powerful alternative tool to characterize the underlying neurobiology of a responder population and possibly identify new targets.

  2. A Cytomorphic Chip for Quantitative Modeling of Fundamental Bio-Molecular Circuits.

    PubMed

    2015-08-01

    We describe a 0.35 μm BiCMOS silicon chip that quantitatively models fundamental molecular circuits via efficient log-domain cytomorphic transistor equivalents. These circuits include those for biochemical binding with automatic representation of non-modular and loading behavior, e.g., in cascade and fan-out topologies; for representing variable Hill-coefficient operation and cooperative binding; for representing inducer, transcription-factor, and DNA binding; for probabilistic gene transcription with analogic representations of log-linear and saturating operation; for gain, degradation, and dynamics of mRNA and protein variables in transcription and translation; and, for faithfully representing biological noise via tunable stochastic transistor circuits. The use of on-chip DACs and ADCs enables multiple chips to interact via incoming and outgoing molecular digital data packets and thus create scalable biochemical reaction networks. The use of off-chip digital processors and on-chip digital memory enables programmable connectivity and parameter storage. We show that published static and dynamic MATLAB models of synthetic biological circuits including repressilators, feed-forward loops, and feedback oscillators are in excellent quantitative agreement with those from transistor circuits on the chip. Computationally intensive stochastic Gillespie simulations of molecular production are also rapidly reproduced by the chip and can be reliably tuned over the range of signal-to-noise ratios observed in biological cells.
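
    As a software point of reference (not the chip's circuitry), a minimal Gillespie simulation of constitutive mRNA birth-death, the kind of stochastic trajectory the chip is reported to reproduce; the rate constants are arbitrary.

        # Hedged sketch: Gillespie SSA for mRNA production and degradation.
        import numpy as np

        rng = np.random.default_rng(5)
        k_tx, k_deg = 2.0, 0.1        # production (1/s) and degradation (1/s)
        t, m, t_end = 0.0, 0, 300.0
        counts = [0]

        while t < t_end:
            a1, a2 = k_tx, k_deg * m            # reaction propensities
            a0 = a1 + a2
            t += rng.exponential(1.0 / a0)      # time to next reaction
            m += 1 if rng.random() < a1 / a0 else -1
            counts.append(m)

        # Crude (not time-weighted) average; theory: k_tx / k_deg = 20.
        print(f"mean mRNA ~ {np.mean(counts):.1f}")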

  3. A 6000 year, quantitative reconstruction of precipitation variability in central Washington from lake sediment oxygen isotopes and predictive models

    NASA Astrophysics Data System (ADS)

    Steinman, B. A.; Abbott, M.; Rosenmeier, M. F.; Stansell, N.

    2010-12-01

    Oxygen isotope (δ18O) records from lake sediment have long been used to provide qualitative and/or semi-quantitative information on past hydroclimatic and glacial conditions. Strict quantitative interpretations of lake sediment δ18O records, that is, the estimation of specific values for past precipitation (P), temperature (T), and relative humidity (RH) from oxygen isotope ratios, have not yet been developed, although recent advances in the use of models designed to simulate lake hydrologic and isotopic responses to climate change have helped to define the range of climatic states capable of producing observed sediment core δ18O variations. For example, water budgets and isotopic mass balances within small, closed-basin lakes in semi-arid environments have been shown to respond most significantly over the short-term (i.e., transiently, over just a few years) to changes in P. At steady state (i.e., over multiple decades or longer) changes in T and RH (in addition to P) become important. Closed-basin lakes, however, are often subject to considerable inter-annual variability in climate (e.g., stochastic, year-to-year changes in P), and hence never truly exist in an equilibrium state. As a consequence, transient isotopic responses to stochastic (i.e., random), inter-annual climate change are typically more important (relative to steady state responses) in determining short term variability in the isotopic composition of lake sediment. Over the long term, transient responses can be averaged to produce isotopic values that are primarily a function of mean state (i.e., the long term average) climate conditions and to a lesser extent the stochastic state of climate (which, e.g., affects lake hydrologic balance through non-linear catchment runoff responses to P). Quantitative interpretation of sediment δ18O records therefore requires determination of stochastic and mean state changes in hydroclimatic variables. This necessity is complicated by the equifinality inherent

  4. A Quantitative Comparison of the Behavior of Human Ventricular Cardiac Electrophysiology Models in Tissue

    PubMed Central

    Elshrif, Mohamed M.; Cherry, Elizabeth M.

    2014-01-01

    indicating areas where existing models disagree, our findings suggest avenues for further experimental work. PMID:24416228

  5. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    NASA Astrophysics Data System (ADS)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk posed by landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. One of the main challenges of quantitatively assessing changes in landslide risk, however, is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature, or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated, physically based FLO-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depths to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve specific to our case study area, were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) by the building area and number of floors.
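
    A toy sketch of the final aggregation step, assuming per-class annual probabilities and flow depths come from the run-out model; the depth-damage vulnerability curve and building value are hypothetical.

        # Hedged sketch: expected annual loss = sum over intensity classes of
        # probability x vulnerability(depth) x exposed value.
        import numpy as np

        depths = np.array([0.5, 1.0, 1.5, 2.0])          # m, class depths (invented)
        p_annual = np.array([0.02, 0.01, 0.005, 0.002])  # invented probabilities

        def vulnerability(depth):
            # Hypothetical depth-damage curve saturating at total loss.
            return np.clip(0.4 * depth, 0.0, 1.0)

        value = 250_000   # EUR; market value x area x floors (hypothetical)
        loss = np.sum(p_annual * vulnerability(depths) * value)
        print(f"expected annual loss ~ EUR {loss:,.0f}")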

  6. A system for quantitative morphological measurement and electronic modelling of neurons: three-dimensional reconstruction.

    PubMed

    Stockley, E W; Cole, H M; Brown, A D; Wheal, H V

    1993-04-01

    A system for accurately reconstructing neurones from optical sections taken at high magnification is described. Cells are digitised on a 68000-based microcomputer to form a database consisting of a series of linked nodes each consisting of x, y, z coordinates and an estimate of dendritic diameter. This database is used to generate three-dimensional (3-D) displays of the neurone and allows quantitative analysis of the cell volume, surface area and dendritic length. Images of the cell can be manipulated locally or transferred to an IBM 3090 mainframe where a wireframe model can be displayed on an IBM 5080 graphics terminal and rotated interactively in real time, allowing visualisation of the cell from all angles. Space-filling models can also be produced. Reconstructions can also provide morphological data for passive electrical simulations of hippocampal pyramidal cells.
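
    A minimal sketch of the morphometry step: given a digitised chain of nodes (x, y, z, diameter), treat successive segments as truncated cones and sum length, lateral surface area and volume; the coordinates below are hypothetical.

        # Hedged sketch: frustum-based length, area and volume from node data.
        import numpy as np

        nodes = np.array([[0, 0, 0, 4.0],     # x, y, z (um), diameter (um)
                          [5, 0, 0, 3.0],
                          [9, 3, 0, 2.0],
                          [12, 6, 2, 1.0]])

        xyz, diam = nodes[:, :3], nodes[:, 3]
        seg = np.linalg.norm(np.diff(xyz, axis=0), axis=1)   # segment lengths
        r1, r2 = diam[:-1] / 2, diam[1:] / 2
        slant = np.sqrt(seg**2 + (r1 - r2)**2)
        area = np.pi * (r1 + r2) * slant                     # lateral area
        vol = np.pi * seg * (r1**2 + r1 * r2 + r2**2) / 3    # frustum volume

        print(f"length {seg.sum():.1f} um, area {area.sum():.1f} um^2, "
              f"volume {vol.sum():.1f} um^3")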

  7. Quantitatively modeling soil-water distribution coefficients of three antibiotics using soil physicochemical properties.

    PubMed

    Gong, Wenwen; Liu, Xinhui; He, Hui; Wang, Liang; Dai, Guohua

    2012-10-01

    Using 14 parameters featuring soil physicochemical properties and the partial least squares (PLS) regression method, three quantitative models were developed for the soil-water distribution coefficients (logK(d)) of oxytetracycline (OTC), sulfamethazine (SMZ) and norfloxacin (NOR) in 23 Chinese natural soil samples from cultivated lands in 19 provinces of China. The cross-validated correlation coefficients (Q²cum) of the three models are 0.866, 0.765 and 0.868, and the standard deviations (SDs) are 0.16, 0.21 and 0.15, respectively. The high Q²cum and low SD values indicate that the three models have high robustness and precise predictability. Six parameters, including pH, clay content, free Fe oxides (DCB-Fe), free Al oxides (DCB-Al), Ca content and Al content, are highly significant in the OTC model; three (pH, clay content and DCB-Fe) are highly significant in the SMZ model; and five (pH, clay content, DCB-Fe, Ca content and organic matter (OM)) are highly significant in the NOR model. The high VIP values of pH (1.17-1.24), clay content (0.81-1.10) and DCB-Fe (0.90-0.99) show that these three soil physicochemical properties play dominant roles in governing the soil-water partitioning of the three antibiotics.

  8. Reservoir architecture modeling: Nonstationary models for quantitative geological characterization. Final report, April 30, 1998

    SciTech Connect

    Kerr, D.; Epili, D.; Kelkar, M.; Redner, R.; Reynolds, A.

    1998-12-01

    The study was comprised of four investigations: facies architecture; seismic modeling and interpretation; Markov random field and Boolean models for geologic modeling of facies distribution; and estimation of geological architecture using the Bayesian/maximum entropy approach. This report discusses results from all four investigations. Investigations were performed using data from the E and F units of the Middle Frio Formation, Stratton Field, one of the major reservoir intervals in the Gulf Coast Basin.

  9. Quantitative Structure Activity Relationship Models for the Antioxidant Activity of Polysaccharides

    PubMed Central

    Nie, Kaiying; Wang, Zhaojing

    2016-01-01

    In this study, quantitative structure activity relationship (QSAR) models for the antioxidant activity of polysaccharides were developed with the 50% effective concentration (EC50) as the dependent variable. To establish optimum QSAR models, multiple linear regression (MLR), support vector machines (SVM) and artificial neural networks (ANN) were used, and 11 molecular descriptors were selected. The optimum QSAR model for predicting the EC50 of DPPH-scavenging activity consisted of four major descriptors. The MLR model gave EC50 = 0.033Ara-0.041GalA-0.03GlcA-0.025PC+0.484 and fitted the training set with R = 0.807. The ANN model improved on this for both the training set (R = 0.96, RMSE = 0.018) and the test set (R = 0.933, RMSE = 0.055), indicating that it predicted the DPPH-scavenging activity of polysaccharides more accurately than the SVM and MLR models. 67 compounds were used for predicting the EC50 of the hydroxyl radical scavenging activity of polysaccharides; here the MLR model gave EC50 = 0.12PC+0.083Fuc+0.013Rha-0.02UA+0.372. A comparison of results from the models indicated that the ANN model (R = 0.944, RMSE = 0.119) was also the best one for predicting the hydroxyl radical scavenging activity of polysaccharides. The MLR and ANN models showed that Ara and GalA appeared critical in determining the EC50 of DPPH-scavenging activity, and that Fuc, Rha, uronic acid and protein content had a great effect on the hydroxyl radical scavenging activity of polysaccharides. The antioxidant activity of a polysaccharide was usually high in the MW range of 4,000-100,000, and could be affected simultaneously by other polysaccharide properties, such as uronic acid and Ara. PMID:27685320
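
    A direct worked application of the DPPH equation reported above; the composition values plugged in are hypothetical.

        # Worked example: the paper's MLR equation for DPPH-scavenging EC50,
        # evaluated at invented monosaccharide/protein percentages.
        def ec50_dpph(ara, gal_a, glc_a, pc):
            return 0.033 * ara - 0.041 * gal_a - 0.03 * glc_a - 0.025 * pc + 0.484

        print(ec50_dpph(ara=12.0, gal_a=8.0, glc_a=5.0, pc=2.0))  # ~0.35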

  10. Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models

    PubMed Central

    Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong

    2015-01-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
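
    For intuition, the three multivariate statistics named here are the ones a standard MANOVA reports. A hedged sketch on synthetic data using statsmodels (this is not the authors' functional linear model code, and the effect sizes are invented):

      import numpy as np
      import pandas as pd
      from statsmodels.multivariate.manova import MANOVA

      rng = np.random.default_rng(1)
      n = 200
      geno = rng.integers(0, 3, n)                      # additive genotype coding 0/1/2
      df = pd.DataFrame({
          "geno": geno,
          "trait1": 0.3 * geno + rng.normal(size=n),    # pleiotropic variant raises both traits
          "trait2": 0.2 * geno + rng.normal(size=n),
      })

      mv = MANOVA.from_formula("trait1 + trait2 ~ geno", data=df)
      print(mv.mv_test())  # Wilks' lambda, Pillai's trace, Hotelling-Lawley trace, Roy's root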

  11. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    PubMed

    Woo, Hyung-June; Reifman, Jaques

    2012-08-01

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure were established by quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing an alternative fate of the immune response: it can clear the infection in finite time, or it can eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.

  12. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    PubMed

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case.

  13. Quantitative fully 3D PET via model-based scatter correction

    SciTech Connect

    Ollinger, J.M.

    1994-05-01

    We have investigated the quantitative accuracy of fully 3D PET using model-based scatter correction by measuring the half-life of Ga-68 in the presence of scatter from F-18. The inner chamber of a Data Spectrum cardiac phantom was filled with 18.5 MBq of Ga-68. The outer chamber was filled with an equivalent amount of F-18. The cardiac phantom was placed in a 22x30.5 cm elliptical phantom containing anthropomorphic lung inserts filled with a water-Styrofoam mixture. Ten frames of dynamic data were collected over 13.6 hours on a Siemens-CTI 953B scanner with the septa retracted. The data were corrected using model-based scatter correction, which uses the emission images, transmission images and an accurate physical model to directly calculate the scatter distribution. Both uncorrected and corrected data were reconstructed using the Promis algorithm. The scatter correction required 4.3% of the total reconstruction time. The scatter fraction in a small volume of interest in the center of the inner chamber of the cardiac insert rose from 4.0% in the first interval to 46.4% in the last interval as the ratio of F-18 activity to Ga-68 activity rose from 1:1 to 33:1. Fitting a single exponential to the last three data points yields estimates of the half-life of Ga-68 of 77.01 minutes and 68.79 minutes for uncorrected and corrected data, respectively. Thus, scatter correction reduces the error from 13.3% to 1.2%. This suggests that model-based scatter correction is accurate in the heterogeneous attenuating medium found in the chest, making possible quantitative, fully 3D PET in the body.
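
    The half-life check lends itself to a short worked example: fit a single exponential to late-time counts and read off the half-life. A sketch with synthetic Ga-68-like data, not the study's measurements:

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.array([600.0, 700.0, 800.0])               # minutes after start (hypothetical)
      counts = 1e4 * np.exp(-np.log(2) * t / 68.0)      # synthetic scatter-corrected decay

      def decay(t, a, half_life):
          return a * np.exp(-np.log(2) * t / half_life)

      (a, t_half), _ = curve_fit(decay, t, counts, p0=(1e4, 60.0))
      print(t_half)  # ~68 min, to be compared against the known Ga-68 half-life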

  14. Multi-epitope Models Explain How Pre-existing Antibodies Affect the Generation of Broadly Protective Responses to Influenza

    PubMed Central

    Zarnitsyna, Veronika I.; Lavine, Jennie; Ellebedy, Ali; Ahmed, Rafi; Antia, Rustom

    2016-01-01

    The development of next-generation influenza vaccines that elicit strain-transcendent immunity against both seasonal and pandemic viruses is a key public health goal. Targeting the evolutionarily conserved epitopes on the stem of influenza’s major surface molecule, hemagglutinin, is an appealing prospect, and novel vaccine formulations show promising results in animal model systems. However, studies in humans indicate that natural infection and vaccination result in limited boosting of antibodies to the stem of HA, and the level of stem-specific antibody elicited is insufficient to provide broad strain-transcendent immunity. Here, we use mathematical models of the humoral immune response to explore how pre-existing immunity affects the ability of vaccines to boost antibodies to the head and stem of HA in humans, and, in particular, how it leads to the apparent lack of boosting of broadly cross-reactive antibodies to the stem epitopes. We consider hypotheses where binding of antibody to an epitope: (i) results in more rapid clearance of the antigen; (ii) leads to the formation of antigen-antibody complexes which inhibit B cell activation through Fcγ receptor-mediated mechanisms; and (iii) masks the epitope and prevents the stimulation and proliferation of specific B cells. We find that epitope masking, but not the former two mechanisms, is key to recapitulating the patterns seen in the data. We discuss the ramifications of our findings for the development of vaccines against both seasonal and pandemic influenza. PMID:27336297

  15. Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.

    PubMed

    Pang, Wei; Coghill, George M

    2015-05-01

    In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively, and the simulation results are presented with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed.

  16. A Second-Generation Device for Automated Training and Quantitative Behavior Analyses of Molecularly-Tractable Model Organisms

    PubMed Central

    Blackiston, Douglas; Shomrat, Tal; Nicolas, Cindy L.; Granata, Christopher; Levin, Michael

    2010-01-01

    A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time, which is necessary for operant conditioning assays). The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undertake complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science. PMID:21179424

  17. Evaluation of New Zealand's high-seas bottom trawl closures using predictive habitat models and quantitative risk assessment.

    PubMed

    Penney, Andrew J; Guinotte, John M

    2013-01-01

    United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine systems (VMEs) likely to occur?; 2) What is the likelihood of fisheries interaction with these VMEs?; and 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost : benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. Distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost : benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas. PMID:24358162

  18. Evaluation of New Zealand’s High-Seas Bottom Trawl Closures Using Predictive Habitat Models and Quantitative Risk Assessment

    PubMed Central

    Penney, Andrew J.; Guinotte, John M.

    2013-01-01

    United Nations General Assembly Resolution 61/105 on sustainable fisheries (UNGA 2007) establishes three difficult questions for participants in high-seas bottom fisheries to answer: 1) Where are vulnerable marine systems (VMEs) likely to occur?; 2) What is the likelihood of fisheries interaction with these VMEs?; and 3) What might qualify as adequate conservation and management measures to prevent significant adverse impacts? This paper develops an approach to answering these questions for bottom trawling activities in the Convention Area of the South Pacific Regional Fisheries Management Organisation (SPRFMO) within a quantitative risk assessment and cost : benefit analysis framework. The predicted distribution of deep-sea corals from habitat suitability models is used to answer the first question. Distribution of historical bottom trawl effort is used to answer the second, with estimates of seabed areas swept by bottom trawlers being used to develop discounting factors for reduced biodiversity in previously fished areas. These are used in a quantitative ecological risk assessment approach to guide spatial protection planning to address the third question. The coral VME likelihood (average, discounted, predicted coral habitat suitability) of existing spatial closures implemented by New Zealand within the SPRFMO area is evaluated. Historical catch is used as a measure of cost to industry in a cost : benefit analysis of alternative spatial closure scenarios. Results indicate that current closures within the New Zealand SPRFMO area bottom trawl footprint are suboptimal for protection of VMEs. Examples of alternative trawl closure scenarios are provided to illustrate how the approach could be used to optimise protection of VMEs under chosen management objectives, balancing protection of VMEs against economic loss to commercial fishers from closure of historically fished areas. PMID:24358162

  19. Quantitative structure-property relationship modeling of remote liposome loading of drugs.

    PubMed

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-06-10

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a data set including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and 5-fold external validation. The external prediction accuracy for binary models was as high as 91-96%; for continuous models the mean coefficient R(2) for regression between predicted versus observed values was 0.76-0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments.
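
    A schematic of the binary/continuous modeling with 5-fold external validation described here, using random forests on randomly generated stand-in descriptors (the real work used several machine learning approaches and curated experimental descriptors):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
      from sklearn.model_selection import cross_val_predict, cross_val_score

      rng = np.random.default_rng(7)
      X = rng.normal(size=(366, 20))            # experimental conditions + chemical descriptors
      dl = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=366)  # initial D/L ratio
      high = (dl > np.median(dl)).astype(int)   # binary endpoint: high vs. low initial D/L

      acc = cross_val_score(RandomForestClassifier(random_state=0), X, high, cv=5).mean()
      pred = cross_val_predict(RandomForestRegressor(random_state=0), X, dl, cv=5)
      r2 = np.corrcoef(dl, pred)[0, 1] ** 2
      print(acc, r2)  # cf. the reported 91-96% accuracy and R(2) of 0.76-0.79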

  20. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models on cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
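
    A toy version of the reformulation, assuming a single pathway edge with a normalized Hill-type transfer function fitted as a smooth nonlinear program; the readouts and parameter names below are invented:

      import numpy as np
      from scipy.optimize import minimize

      stimulus = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
      observed = np.array([0.02, 0.30, 0.55, 0.80, 0.95])  # hypothetical downstream readout

      def hill(x, k, n):
          return x ** n / (k ** n + x ** n)

      def objective(p):                                    # sum-of-squares data mismatch
          k, n = p
          return np.sum((hill(stimulus, k, n) - observed) ** 2)

      res = minimize(objective, x0=[0.3, 2.0], bounds=[(1e-3, 1.0), (0.5, 6.0)])
      print(res.x)  # fitted (k, n); real problems couple thousands of such edge terms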

  1. Quantitative analyses and modelling to support achievement of the 2020 goals for nine neglected tropical diseases.

    PubMed

    Hollingsworth, T Déirdre; Adams, Emily R; Anderson, Roy M; Atkins, Katherine; Bartsch, Sarah; Basáñez, María-Gloria; Behrend, Matthew; Blok, David J; Chapman, Lloyd A C; Coffeng, Luc; Courtenay, Orin; Crump, Ron E; de Vlas, Sake J; Dobson, Andy; Dyson, Louise; Farkas, Hajnal; Galvani, Alison P; Gambhir, Manoj; Gurarie, David; Irvine, Michael A; Jervis, Sarah; Keeling, Matt J; Kelly-Hope, Louise; King, Charles; Lee, Bruce Y; Le Rutte, Epke A; Lietman, Thomas M; Ndeffo-Mbah, Martial; Medley, Graham F; Michael, Edwin; Pandey, Abhishek; Peterson, Jennifer K; Pinsent, Amy; Porco, Travis C; Richardus, Jan Hendrik; Reimer, Lisa; Rock, Kat S; Singh, Brajendra K; Stolk, Wilma; Swaminathan, Subramanian; Torr, Steve J; Townsend, Jeffrey; Truscott, James; Walker, Martin; Zoueva, Alexandra

    2015-01-01

    Quantitative analysis and mathematical models are useful tools in informing strategies to control or eliminate disease. Currently, there is an urgent need to develop these tools to inform policy to achieve the 2020 goals for neglected tropical diseases (NTDs). In this paper we give an overview of a collection of novel model-based analyses which aim to address key questions on the dynamics of transmission and control of nine NTDs: Chagas disease, visceral leishmaniasis, human African trypanosomiasis, leprosy, soil-transmitted helminths, schistosomiasis, lymphatic filariasis, onchocerciasis and trachoma. Several common themes resonate throughout these analyses, including: the influence of epidemiological setting on the success of interventions; targeting groups who are at highest risk of infection or re-infection; and reaching populations who are not accessing interventions and may act as a reservoir for infection. The results also highlight the challenge of maintaining elimination 'as a public health problem' when true elimination is not reached. The models elucidate the factors that may be contributing most to persistence of disease and discuss the requirements for eventually achieving true elimination, if that is possible. Overall this collection presents new analyses to inform current control initiatives. These papers form a base from which further development of the models and more rigorous validation against a variety of datasets can help to give more detailed advice. At the moment, the models' predictions are being considered as the world prepares for a final push towards control or elimination of neglected tropical diseases by 2020. PMID:26652272

  2. Combining quantitative trait loci analysis with physiological models to predict genotype-specific transpiration rates.

    PubMed

    Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K

    2015-04-01

    Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation.
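
    As a hedged sketch of the general idea (not the authors' model), a Ball-Berry-type empirical conductance equation can be parameterized with a genotype-specific g0 mapped at a QTL; every number below is an illustrative placeholder:

      def stomatal_conductance(A, rh, cs, g0, g1=9.0):
          """Ball-Berry form: gs = g0 + g1 * A * rh / cs.
          A: net assimilation (umol m-2 s-1), rh: relative humidity at the
          leaf surface (0-1), cs: CO2 at the leaf surface (umol mol-1)."""
          return g0 + g1 * A * rh / cs

      def transpiration(gs, vpd, p_atm=101.3):
          """Approximate leaf transpiration as gs times the vapor-pressure gradient."""
          return gs * vpd / p_atm

      # Hypothetical alleles at a g0 QTL distinguishing two parental lines
      for genotype, g0 in {"parent_A": 0.01, "parent_B": 0.05}.items():
          gs = stomatal_conductance(A=12.0, rh=0.6, cs=380.0, g0=g0)
          print(genotype, transpiration(gs, vpd=1.5))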

  3. Quantitative modeling of bioconcentration factors of carbonyl herbicides using multivariate image analysis.

    PubMed

    Freitas, Mirlaine R; Barigye, Stephen J; Daré, Joyce K; Freitas, Matheus P

    2016-06-01

    The bioconcentration factor (BCF) is an important parameter used to estimate the propensity of chemicals to accumulate in aquatic organisms from the ambient environment. While simple regressions for estimating the BCF of chemical compounds from water solubility or the n-octanol/water partition coefficient have been proposed in the literature, these models do not always yield good correlations and more descriptive variables are required for better modeling of BCF data for a given series of organic pollutants, such as some herbicides. Thus, the logBCF values for a set of carbonyl herbicides comprising amide, urea, carbamate and thiocarbamate groups were quantitatively modeled using multivariate image analysis (MIA) descriptors, derived from colored image representations for chemical structures. The logBCF model was calibrated and rigorously validated (r(2) = 0.79, q(2) = 0.70 and rtest(2) = 0.81), providing a comprehensive three-parameter linear equation after variable selection (logBCF = 5.682 - 0.00233 × X9774 - 0.00070 × X813 - 0.00273 × X5144); the variables represent pixel coordinates in the multivariate image. Finally, chemical interpretation of the obtained models in terms of the structural characteristics responsible for the enhanced or reduced logBCF values was performed, providing key leads in the prospective development of more eco-friendly synthetic herbicides. PMID:26971171

  4. Quantitative Structure – Property Relationship Modeling of Remote Liposome Loading Of Drugs

    PubMed Central

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-01-01

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a dataset including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and five-fold external validation. The external prediction accuracy for binary models was as high as 91–96%; for continuous models the mean coefficient R2 for regression between predicted versus observed values was 0.76–0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932

  5. Antiproliferative Pt(IV) complexes: synthesis, biological activity, and quantitative structure-activity relationship modeling.

    PubMed

    Gramatica, Paola; Papa, Ester; Luini, Mara; Monti, Elena; Gariboldi, Marzia B; Ravera, Mauro; Gabano, Elisabetta; Gaviglio, Luca; Osella, Domenico

    2010-09-01

    Several Pt(IV) complexes of the general formula [Pt(L)2(L')2(L'')2] [axial ligands L are Cl-, RCOO-, or OH-; equatorial ligands L' are two am(m)ine or one diamine; and equatorial ligands L'' are Cl- or glycolato] were rationally designed and synthesized in the attempt to develop a predictive quantitative structure-activity relationship (QSAR) model. Numerous theoretical molecular descriptors were used alongside physicochemical data (i.e., reduction peak potential, Ep, and partition coefficient, log Po/w) to obtain a validated QSAR between in vitro cytotoxicity (half maximal inhibitory concentrations, IC50, on A2780 ovarian and HCT116 colon carcinoma cell lines) and some features of Pt(IV) complexes. In the resulting best models, a lipophilic descriptor (log Po/w or the number of secondary sp3 carbon atoms) plus an electronic descriptor (Ep, the number of oxygen atoms, or the topological polar surface area expressed as the N,O polar contribution) is necessary for modeling, supporting the general finding that the biological behavior of Pt(IV) complexes can be rationalized on the basis of their cellular uptake, the Pt(IV) → Pt(II) reduction, and the structure of the corresponding Pt(II) metabolites. Novel compounds were synthesized on the basis of their predicted cytotoxicity in the preliminary QSAR model, and were experimentally tested. A final QSAR model, based solely on theoretical molecular descriptors to ensure its general applicability, is proposed.

  6. Quantitative structure-property relationship modeling of remote liposome loading of drugs.

    PubMed

    Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2012-06-10

    Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a data set including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and 5-fold external validation. The external prediction accuracy for binary models was as high as 91-96%; for continuous models the mean coefficient R(2) for regression between predicted versus observed values was 0.76-0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932

  7. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015.

    PubMed

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent-based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions), which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long-term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third-party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that, when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be 'invaded' by a newcomer third party very quickly, while the second remains immune to such invasion.

  8. Quantitative modeling of the Saccharomyces cerevisiae FLR1 regulatory network using an S-system formalism.

    PubMed

    Calçada, Dulce; Vinga, Susana; Freitas, Ana T; Oliveira, Arlindo L

    2011-10-01

    In this study we address the problem of finding a quantitative mathematical model for the genetic network regulating the stress response of the yeast Saccharomyces cerevisiae to the agricultural fungicide mancozeb. An S-system formalism was used to model the interactions of a five-gene network encoding four transcription factors (Yap1, Yrr1, Rpn4 and Pdr3) regulating the transcriptional activation of the FLR1 gene. Parameter estimation was accomplished by decoupling the resulting system of nonlinear ordinary differential equations into a larger nonlinear algebraic system, and using the Levenberg-Marquardt algorithm to fit the model's predictions to experimental data. The introduction of constraints in the model, related to the putative topology of the network, was explored. The results show that forcing the network connectivity to adhere to this topology did not lead to better results than those obtained using an unrestricted network topology. Overall, the modeling approach obtained partial success when trained on the nonmutant datasets, although further work is required if one wishes to obtain more accurate prediction of the time courses. PMID:21976379
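
    A compact illustration of the scheme, assuming a two-variable S-system of the form dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij, fitted to synthetic time courses with the Levenberg-Marquardt algorithm (the actual network has five genes and a decoupled algebraic formulation):

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def s_system(t, x, a1, g12, b1, a2, h22):
          x1, x2 = np.maximum(x, 1e-9)
          return [a1 * x2 ** g12 - b1 * x1,        # X1: production activated by X2, linear decay
                  a2 - 0.5 * x2 ** h22]            # X2: constant production, nonlinear decay

      t_obs = np.linspace(0, 10, 20)
      p_true = (1.2, 0.8, 0.6, 0.9, 1.1)
      x_obs = solve_ivp(s_system, (0, 10), [0.5, 0.5], t_eval=t_obs, args=p_true).y

      def residuals(p):
          sol = solve_ivp(s_system, (0, 10), [0.5, 0.5], t_eval=t_obs, args=tuple(p))
          return (sol.y - x_obs).ravel()

      fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0, 1.0], method="lm")
      print(fit.x)  # recovers the rate constants and kinetic orders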

  9. Quantitative Agent Based Model of Opinion Dynamics: Polish Elections of 2015

    PubMed Central

    Sobkowicz, Pawel

    2016-01-01

    We present results of an abstract, agent-based model of opinion dynamics simulations based on the emotion/information/opinion (E/I/O) approach, applied to a strongly polarized society, corresponding to the Polish political scene between 2005 and 2015. Under certain conditions the model leads to metastable coexistence of two subcommunities of comparable size (supporting the corresponding opinions), which corresponds to the bipartisan split found in Poland. Spurred by the recent breakdown of this political duopoly, which occurred in 2015, we present a model extension that describes both the long-term coexistence of the two opposing opinions and a rapid, transitory change due to the appearance of a third-party alternative. We provide quantitative comparison of the model with the results of polls and elections in Poland, testing the assumptions related to the modeled processes and the parameters used in the simulations. It is shown that, when the propaganda messages of the two incumbent parties differ in emotional tone, the political status quo may be unstable. The asymmetry of the emotions within the support bases of the two parties allows one of them to be ‘invaded’ by a newcomer third party very quickly, while the second remains immune to such invasion. PMID:27171226

  10. Quantitative saltwater modeling for validation of sub-grid scale LES turbulent mixing and transport models for fire

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro; Marshall, Andre; Gollner, Michael

    2015-11-01

    A quantitative understanding of turbulent mixing and transport in buoyant flows is indispensable for accurate modeling of combustion, fire dynamics and smoke transport used in both fire safety design and investigation. This study describes the turbulent mixing behavior of scaled, unconfined plumes using a quantitative saltwater modeling technique. An analysis of density-difference turbulent fluctuations, captured as the collected images are scaled down in resolution, allows for the determination of the largest dimension over which LES averaging should be performed. This is important because LES models must assume a distribution for sub-grid scale mixing, such as the β-PDF distribution. We showed that there is a loss of fidelity in resolving the flow for a cell size above 0.54D*, where D* is a characteristic length scale for the plume. This point represents the threshold above which the fluctuations start to grow monotonically. Turbulence statistics were also analyzed in terms of span-wise intermittency and time and space correlation coefficients. An unexpected condition in the core of the plume, where a substantial amount of ambient fluid (fresh water) is found, and the mixing process under buoyant conditions were both found to depend on the resolution of the measurements used.
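
    The resolution analysis can be mimicked in a few lines: coarse-grain a scalar (density-difference) field at increasing cell sizes and track the sub-cell fluctuations. A sketch on a synthetic field, not the saltwater-imaging data:

      import numpy as np

      rng = np.random.default_rng(3)
      field = rng.normal(size=(256, 256)) + np.linspace(0, 1, 256)[:, None]  # toy plume image

      def subgrid_rms(field, cell):
          """RMS of fluctuations about each cell mean, for square cells."""
          n = (field.shape[0] // cell) * cell
          blocks = field[:n, :n].reshape(n // cell, cell, n // cell, cell)
          means = blocks.mean(axis=(1, 3), keepdims=True)
          return np.sqrt(((blocks - means) ** 2).mean())

      for cell in (4, 8, 16, 32, 64):
          print(cell, subgrid_rms(field, cell))  # fluctuations grow once cells exceed the threshold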

  11. Effect of platform, reference material, and quantification model on enumeration of Enterococcus by quantitative PCR methods.

    PubMed

    Cao, Yiping; Sivaganesan, Mano; Kinzelman, Julie; Blackwood, A Denene; Noble, Rachel T; Haugland, Richard A; Griffith, John F; Weisberg, Stephen B

    2013-01-01

    Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. qPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testing in the United States (USEPA, 2011. EPA-OW-2011-0466, FRL-9609-3, Notice of Availability of Draft Recreational Water Quality Criteria and Request for Scientific Views). However, transition of qPCR from a research tool to routine water quality testing requires information on how various method variations affect target enumeration. Here we compared qPCR performance and enumeration of enterococci in spiked and environmental water samples using three qPCR platforms (Applied Biosystems StepOnePlus™, the BioRad iQ™5 and the Cepheid SmartCycler® II), two reference materials (lyophilized cells and frozen cells on filters) and two comparative CT quantification models (ΔCT and ΔΔCT). Reference materials exerted the biggest influence, consistently affecting results by approximately 0.5 log(10) unit. Platform had the smallest effect, generally exerting <0.1 log(10) unit difference in final results. Quantification model led to small differences (0.04-0.2 log(10) unit) in this study with relatively uninhibited samples, but has the potential to cause as much as 8-fold (0.9 log(10) unit) difference in potentially inhibitory samples. Our findings indicate the need for a certified and centralized source of reference materials and additional studies to assess applicability of the quantification models in analyses of PCR inhibitory samples.
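
    A worked example of the arithmetic behind the two comparative CT models named above; the CT values, the efficiency of 2 (doubling per cycle), and the use of a sample-processing control are illustrative assumptions, not values from the study:

      sample_ct, calibrator_ct = 31.2, 27.8      # enterococci assay CTs (hypothetical)
      calibrator_cells = 1e4                     # cell equivalents in the reference material

      # Delta-CT model: quantify directly against the calibrator
      delta_ct = sample_ct - calibrator_ct
      cells_dct = calibrator_cells * 2.0 ** (-delta_ct)

      # Delta-Delta-CT model: additionally normalize by a sample-processing
      # control (SPC) measured in both sample and calibrator
      spc_sample_ct, spc_calibrator_ct = 24.0, 23.5
      ddct = (sample_ct - spc_sample_ct) - (calibrator_ct - spc_calibrator_ct)
      cells_ddct = calibrator_cells * 2.0 ** (-ddct)

      print(cells_dct, cells_ddct)  # the two models diverge when inhibition shifts the SPC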

  12. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    NASA Technical Reports Server (NTRS)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope-0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process in which differential photochemical dissociation could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  13. Objective and quantitative evaluation of motor function in a monkey model of Parkinson's disease.

    PubMed

    Saiki, Hidemoto; Hayashi, Takuya; Takahashi, Ryosuke; Takahashi, Jun

    2010-07-15

    Monkeys treated with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) are currently the best animal model for Parkinson's disease (PD) and have been widely used for physiological and pharmacological investigations. However, objective and quantitative assessments have not been established for grading their motor behaviors. In order to develop a method for an unbiased evaluation, we performed a video-based assessment, used qualitative rating scales, and carried out an in vivo investigation of dopamine (DA) transporter binding in systemically MPTP-treated monkeys. The video-based analysis of spontaneous movement clearly demonstrated a significant correlation with the qualitative rating score. The assessment of DA transporter (DAT) function by [(11)C]-CFT-PET showed that, when compared with normal animals, the MPTP-treated animals exhibited decreased CFT binding in the bilateral striatum, particularly in the dorsal part of the putamen and caudate. Among the MPTP-treated monkeys, an unbiased PET analysis revealed a significant correlation between CFT binding in the midbrain and the qualitative rating scores or the amount of spontaneous movement. These results indicate that a video-based analysis can be a reliable tool for an objective and quantitative evaluation of motor dysfunction in MPTP-treated monkeys, and furthermore, that DAT function in the midbrain may also be important for the evaluation.

  14. Climate change and dengue: a critical and systematic review of quantitative modelling approaches

    PubMed Central

    2014-01-01

    Background: Many studies have found associations between climatic conditions and dengue transmission. However, there is a debate about the future impacts of climate change on dengue transmission. This paper reviewed epidemiological evidence on the relationship between climate and dengue with a focus on quantitative methods for assessing the potential impacts of climate change on global dengue transmission. Methods: A literature search was conducted in October 2012, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search focused on peer-reviewed journal articles published in English from January 1991 through October 2012. Results: Sixteen studies met the inclusion criteria and most studies showed that the transmission of dengue is highly sensitive to climatic conditions, especially temperature, rainfall and relative humidity. Studies on the potential impacts of climate change on dengue indicate increased climatic suitability for transmission and an expansion of the geographic regions at risk during this century. A variety of quantitative modelling approaches were used in the studies. Several key methodological issues and current knowledge gaps were identified through this review. Conclusions: It is important to assemble spatio-temporal patterns of dengue transmission compatible with long-term data on climate and other socio-ecological changes, as this would advance projections of dengue risks associated with climate change. PMID:24669859

  15. Quantitative models of sediment generation and provenance: State of the art and future developments

    NASA Astrophysics Data System (ADS)

    Weltje, Gert Jan

    2012-12-01

    An overview of quantitative approaches to analysis and modelling of sediment generation and provenance is presented, with an emphasis on major framework components as determined by means of petrographic techniques. Conceptual models of sediment provenance are shown to be consistent with two classes of numerical-statistical models, i.e. linear mixing models and compositional linear models. These cannot be placed within a common mathematical framework, because the former requires that sediment composition is expressed in terms of proportions, whereas the latter requires that sediment composition is expressed in terms of log-ratios of proportions. Additivity of proportions, a fundamental assumption in linear mixing models, cannot be readily expressed in log-ratio terms. Linear mixing models may be used to describe compositional variability in terms of physical and conceptual (un)mixing. Models of physical (un)mixing are appropriate for describing compositional variation within transport-invariant subpopulations of grains as a result of varying rates of supply of detritus from multiple sources. Conceptual (un)mixing governs the relations among chemical, mineralogical and petrographic characteristics of sediments, which represent different descriptive levels within a compositional hierarchy. Compositional linear process models may be used to describe compositional and/or textural evolution resulting from selective modifications induced by sediment transport, as well as chemical and mechanical weathering. Current approaches to modelling of surface processes treat the coupled evolution of source areas and sedimentary basins in terms of bulk mass transfer only, and do not take into account compositional and textural sediment properties. Moving from the inverse modelling approach embodied in provenance research to process-based forward models of sediment generation which provide detailed predictions of sediment properties meets with considerable (albeit not insurmountable) difficulties.

  16. Comparison of Mount Etna, Kilauea, and Piton de la Fournaise by a quantitative modeling of their eruption histories

    NASA Astrophysics Data System (ADS)

    Aki, Keiiti; Ferrazzini, Valérie

    2001-01-01

    From published data we found characteristic relations between the amount V of erupted lava and the duration d of eruption for Mount Etna, Kilauea, and Piton de la Fournaise. The relation is similar between Mount Etna and Kilauea, where the increase of V with increasing d is slow, showing a trend of a lower flow rate for a larger eruption. For Piton de la Fournaise, however, the trend is distinctly different, showing a higher flow rate for a larger eruption. We constructed quantitative models of a magma system with reservoirs at various levels and tested hypotheses about the existence of large reservoirs under these volcanoes using the observed V-d relations. We found that the observed V-d relation is consistent with the presence of a large reservoir at a shallow depth under Kilauea, and with the presence of a large reservoir near the bottom of the volcanic edifice under Piton de la Fournaise. The above models for Kilauea and Piton de la Fournaise are characterized by a simple path from the mantle reservoir which leads to the shallowest reservoir connected to a hierarchy of channels with varying resistance to eruption sites. We obtained less satisfactory agreement between the observed V-d relation and that predicted by our model for Mount Etna. Time histories of pressures in the reservoirs at various levels obtained by the modeling explain why inflation-deflation cycles observed at Kilauea have not been reported for Piton de la Fournaise. The absence of volcano-tectonic earthquakes with magnitude greater than ~2.5 under Piton de la Fournaise is attributed to the simplicity of the magma path from the mantle to the shallowest reservoir and the underdeveloped rift zone, which result in a stress concentration localized only beneath the summit area.

  17. Quantitative structure-activity relationship models of chemical transformations from matched pairs analyses.

    PubMed

    Beck, Jeremy M; Springer, Clayton

    2014-04-28

    The concepts of activity cliffs and matched molecular pairs (MMP) are recent paradigms for analysis of data sets to identify structural changes that may be used to modify the potency of lead molecules in drug discovery projects. Analysis of MMPs was recently demonstrated as a feasible technique for quantitative structure-activity relationship (QSAR) modeling of prospective compounds. Within a small data set, however, the lack of matched pairs and the lack of knowledge about specific chemical transformations limit prospective applications. Here we present an alternative technique that determines pairwise descriptors for each matched pair and then uses a QSAR model to estimate the activity change associated with a chemical transformation. The descriptors effectively group similar transformations and incorporate information about the transformation and its local environment. Use of a transformation QSAR model allows one to estimate the activity change for novel transformations and therefore returns predictions for a larger fraction of test set compounds. Application of the proposed methodology to four public data sets results in increased model performance over a benchmark random forest and direct application of chemical transformations using QSAR-by-matched molecular pairs analysis (QSAR-by-MMPA).
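
    One way to picture the pairwise-descriptor idea, with random placeholder descriptors standing in for real chemistry and a random forest as the transformation QSAR model (the published work used its own descriptor scheme):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      desc_a = rng.normal(size=(500, 32))                       # descriptors of compound A in each pair
      desc_b = desc_a + rng.normal(scale=0.3, size=(500, 32))   # B = A after a transformation

      pair_desc = np.hstack([desc_b - desc_a, desc_a])          # transformation + local environment
      delta_act = 1.5 * (desc_b - desc_a)[:, 0] + rng.normal(scale=0.2, size=500)  # activity change

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      print(cross_val_score(model, pair_desc, delta_act, cv=5).mean())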

  18. [Quantitative risk model for verocytotoxigenic Escherichia coli cross-contamination during homemade hamburger preparation].

    PubMed

    Signorini, M L; Frizzo, L S

    2009-01-01

    The objective of this study was to develop a quantitative risk model for verocytotoxigenic Escherichia coli (VTEC) cross-contamination during hamburger preparation at home. Published scientific information about the disease was considered for the elaboration of the model, which included a number of routines performed during food preparation in kitchens. The associated probabilities of bacterial transference between food items and kitchen utensils which best described each stage of the process were incorporated into the model by using @Risk software. Handling raw meat before preparing ready-to-eat foods (odds ratio, OR, 6.57), as well as hand (OR = 12.02) and cutting board (OR = 5.02) washing habits were the major risk factors of VTEC cross-contamination from meat to vegetables. The information provided by this model should be considered when designing public information campaigns on hemolytic uremic syndrome risk directed to food handlers, in order to stress the importance of the above mentioned factors in disease transmission.
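
    A hedged Monte Carlo sketch of the kind of model described (the study built it in @Risk): propagate transfer probabilities from raw meat through hands and cutting board to vegetables. Every probability below is an illustrative placeholder, not a value from the study:

      import numpy as np

      rng = np.random.default_rng(11)
      n = 100_000                                     # simulated meal preparations

      meat_contaminated = rng.random(n) < 0.04        # VTEC prevalence in raw meat (assumed)
      washes_hands = rng.random(n) < 0.70             # handler washes hands after raw meat
      washes_board = rng.random(n) < 0.60             # handler washes cutting board

      # Transfer probability rises sharply when hygiene steps are skipped
      transfer = np.where(washes_hands, 0.01, 0.20) + np.where(washes_board, 0.01, 0.15)
      veg_contaminated = meat_contaminated & (rng.random(n) < transfer)

      print(veg_contaminated.mean())                  # per-meal cross-contamination risk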

  19. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  20. A Suite of Models to Support the Quantitative Assessment of Spread in Pest Risk Analysis

    PubMed Central

    Robinet, Christelle; Kehlenbeck, Hella; Kriticos, Darren J.; Baker, Richard H. A.; Battisti, Andrea; Brunel, Sarah; Dupin, Maxime; Eyre, Dominic; Faccoli, Massimo; Ilieva, Zhenya; Kenis, Marc; Knight, Jon; Reynaud, Philippe; Yart, Annie; van der Werf, Wopke

    2012-01-01

    Pest Risk Analyses (PRAs) are conducted worldwide to decide whether and how exotic plant pests should be regulated to prevent invasion. There is an increasing demand for science-based risk mapping in PRA. Spread plays a key role in determining the potential distribution of pests, but there is no suitable spread modelling tool available for pest risk analysts. Existing models are species-specific, biologically and technically complex, and data-hungry. Here we present a set of four simple and generic spread models that can be parameterised with limited data. Simulations with these models generate maps of the potential expansion of an invasive species at continental scale. The models have one to three biological parameters. They differ in whether they treat spatial processes implicitly or explicitly, and in whether they consider pest density or pest presence/absence only. The four models represent four complementary perspectives on the process of invasion and, because they have different initial conditions, they can be considered as alternative scenarios. All models take into account habitat distribution and climate. We present an application of each of the four models to the western corn rootworm, Diabrotica virgifera virgifera, using historic data on its spread in Europe. Further tests as proof of concept were conducted with a broad range of taxa (insects, nematodes, plants, and plant pathogens). Pest risk analysts, the intended model users, found the model outputs to be generally credible and useful. The estimation of parameters from data requires insights into population dynamics theory, and this requires guidance. If used appropriately, these generic spread models provide a transparent and objective tool for evaluating the potential spread of pests in PRAs. Further work is needed to validate models, build familiarity in the user community and create a database of species parameters to help realize their potential in PRA practice. PMID:23056174
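
    In the spirit of the simplest of these model families (spatially explicit, presence/absence, one biological parameter), a sketch of spread on a gridded habitat map; the grid, suitability mask and colonization probability are invented:

      import numpy as np

      rng = np.random.default_rng(4)
      habitat = rng.random((100, 100)) < 0.6          # suitable-cell mask (habitat + climate)
      present = np.zeros_like(habitat)
      present[50, 50] = True                          # introduction point

      def step(present, habitat, rng, p_colonize=0.3):
          """Each occupied cell may colonize its four suitable neighbours."""
          nb = np.zeros_like(present)
          nb[1:, :] |= present[:-1, :]; nb[:-1, :] |= present[1:, :]
          nb[:, 1:] |= present[:, :-1]; nb[:, :-1] |= present[:, 1:]
          return present | (nb & habitat & (rng.random(present.shape) < p_colonize))

      for year in range(20):
          present = step(present, habitat, rng)
      print(present.sum(), "cells occupied after 20 years")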

  1. Modeling elastic waves in coupled media: Estimate of soft tissue influence and application to quantitative ultrasound.

    PubMed

    Chen, Jiangang; Cheng, Li; Su, Zhongqing; Qin, Ling

    2013-02-01

    The effect of medium coupling on propagation of elastic waves is a general concern in a variety of engineering and bio-medical applications. Although some theories and analytical models are available for describing waves in multi-layered engineering structures, they do not focus on canvassing ultrasonic waves in human bones with coupled soft tissues, where the considerable differences in acoustic impedance between bone and soft tissue may pose a challenge in using these models (the soft tissues having an acoustic impedance around 80% less than that of a typical bone). Without proper treatment of this coupling effect, the precision of quantitative ultrasound (QUS) for clinical bone assessment can be compromised. The coupling effect of mimicked soft tissues on the first-arriving signal (FAS) and second-arriving signal (SAS) in a series of synthesized soft-tissue-bone phantoms was investigated experimentally and calibrated quantitatively. Understanding of the underlying mechanism of the coupling effect was supplemented by a dedicated finite element analysis. As revealed, the medium coupling influences different wave modes to different degrees: for both FAS and SAS, the most significant changes take place when the soft tissues are initially introduced, and the decrease in signal peak energy continues with increase in the thickness or elastic modulus of the soft tissues, whereas the changes in propagation velocity fluctuate within 5% regardless of further increase in the thickness or elastic modulus of the soft tissues. As an application, the calibrated effects were employed to enhance the precision of SAS-based QUS when used for predicting the simulated healing status of a mimicked bone fracture; prediction of healing progress based on changes in the velocity of the FAS or the SAS proved inaccurate unless the effect of soft tissue coupling was taken into account, entailing appropriate compensation for the coupling effect.

  2. A quantitative trait locus for variation in dopamine metabolism mapped in a primate model using reference sequences from related species

    PubMed Central

    Freimer, Nelson B.; Service, Susan K.; Ophoff, Roel A.; Jasinska, Anna J.; McKee, Kevin; Villeneuve, Amelie; Belisle, Alexandre; Bailey, Julia N.; Breidenthal, Sherry E.; Jorgensen, Matthew J.; Mann, J. John; Cantor, Rita M.; Dewar, Ken; Fairbanks, Lynn A.

    2007-01-01

    Non-human primates (NHP) provide crucial research models. Their strong similarities to humans make them particularly valuable for understanding complex behavioral traits and brain structure and function. We report here the genetic mapping of an NHP nervous system biologic trait, the cerebrospinal fluid (CSF) concentration of the dopamine metabolite homovanillic acid (HVA), in an extended inbred vervet monkey (Chlorocebus aethiops sabaeus) pedigree. CSF HVA is an index of CNS dopamine activity, which is hypothesized to contribute substantially to behavioral variations in NHP and humans. For quantitative trait locus (QTL) mapping, we carried out a two-stage procedure. We first scanned the genome using a first-generation genetic map of short tandem repeat markers. Subsequently, using >100 SNPs within the most promising region identified by the genome scan, we mapped a QTL for CSF HVA at a genome-wide level of significance (peak logarithm of odds score >4) to a narrow well delineated interval (<10 Mb). The SNP discovery exploited conserved segments between human and rhesus macaque reference genome sequences. Our findings demonstrate the potential of using existing primate reference genome sequences for designing high-resolution genetic analyses applicable across a wide range of NHP species, including the many for which full genome sequences are not yet available. Leveraging genomic information from sequenced to nonsequenced species should enable the utilization of the full range of NHP diversity in behavior and disease susceptibility to determine the genetic basis of specific biological and behavioral traits. PMID:17884980

  3. Can nuclear magnetic resonance provide useful microscale data for quantitative testing of reactive transport models? (Invited)

    NASA Astrophysics Data System (ADS)

    Seymour, J. D.; Codd, S. L.

    2010-12-01

    In posing the query in our title we are inherently answering the query posed by the conveners, ‘Is microscale information needed in reactive transport models?’, in the affirmative. Our perspective on this is derived in large part from the NMR measurements of displacement time and length scale dependent molecular dynamics through a range of porous media with very different microscale structures which will be presented. The measured dynamics of hydrodynamic dispersion in model monodisperse beadpacks subjected to bioreactions which form biofilms [1] and precipitate [2] and colloidal deposition, as well as an open cell solid polymer foam [3], show strong dependence on microscale structure. As an example of what is required for direct quantitative comparison of experimental data, analytical theory [4] and simulation, the foam structure provides a system with an ill-defined microscale length scale and a mean velocity field that is highly heterogeneous [3,5]. The NMR data directly provide a dynamic length scale [6]. The agreement found between the non-equilibrium statistical mechanics model of preasymptotic hydrodynamic dispersion [4], which incorporates the microstructure through the form of the velocity autocorrelation function, and the data [3] argues for the role of microscale structure in transport. Recently developed 2D magnetic relaxation time and diffusion correlation and exchange experiments are shown to provide unique new data on the pore structure and surface properties in model and rock porous systems subjected to bioreaction and supercritical fluid challenges. Coupling of these measurements and hydrodynamic dispersion measurements can provide the relation between microscale structure and transport dynamics. The direct comparison of theory and experiment is required to test and advance predictive reactive transport models; the difficulties and needs to enhance direct quantitative comparisons with NMR data will be addressed, along with the significant limitations

  4. Final Report for Dynamic Models for Causal Analysis of Panel Data. Models for Change in Quantitative Variables, Part II Scholastic Models. Part II, Chapter 4.

    ERIC Educational Resources Information Center

    Hannan, Michael T.

    This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…

  5. Quantitative Comparison of a New Ab Initio Micrometeor Ablation Model with an Observationally Verifiable Standard Model

    NASA Astrophysics Data System (ADS)

    Meisel, David D.; Szasz, Csilla; Kero, Johan

    2008-06-01

    The Arecibo UHF radar is able to detect the head echoes of micron-sized meteoroids up to velocities of 75 km/s over a height range of 80-140 km. Because of their small size there are many uncertainties involved in calculating their above-atmosphere properties as needed for orbit determination. An ab initio model of meteor ablation has been devised that should work over the mass range 10^-16 kg to 10^-7 kg, but the faint end of this range cannot be observed by any other method and so direct verification is not possible. On the other hand, the EISCAT UHF radar system detects micrometeors in the high mass part of this range and its observations can be fit to a “standard” ablation model and calibrated to optical observations (Szasz et al. 2007). In this paper, we present a preliminary comparison of the two models, one observationally confirmable. Among the features of the ab initio model that are different from the “standard” model are: (1) it uses the experimentally based low pressure vaporization theory of O’Hanlon (A user’s guide to vacuum technology, 2003) for ablation, (2) it uses velocity-dependent functions fit from experimental data on heat transfer, luminosity and ionization efficiencies measured by Friichtenicht and Becker (NASA Special Publication 319: 53, 1973) for micron-sized particles, (3) it assumes a density and temperature dependence of the micrometeoroid and ablation product specific heats, (4) it assumes a density and size dependent value for the thermal emissivity and (5) it uses a unified synthesis of experimental data for the most important meteoroid elements and their oxides through least square fits (as functions of temperature, density, and/or melting point) of the tables of thermodynamic parameters given in Weast (CRC Handbook of Physics and Chemistry, 1984), Gray (American Institute of Physics Handbook, 1972), and Cox (Allen’s Astrophysical Quantities, 2000). This utilization of mostly experimentally determined data is the main reason for

  6. Quantitative model of calcium/calmodulin-dependent protein kinase II activation

    NASA Astrophysics Data System (ADS)

    Mihalas, Stefan

    Calcium/calmodulin-dependent protein kinase II (CaMKII) is a key element in the calcium second messenger cascades that lead to long term potentiation (LTP) of synaptic strength. In this thesis, I have constructed kinetic models of activation of CaMKII and measured some of the unknown parameters of the model. I used the models to elucidate mechanisms of activation of CaMKII and to study the kinetics of its activation under conditions similar to those in dendritic spines. In chapter 2, I developed a new experimental method to rapidly stop the autophosphorylation reaction. I used this method to measure the catalytic turnover number of CaMKII. To quantitatively characterize CaMKII autophosphorylation in nonsaturating calcium, I also measured the autophosphorylation turnover number when CaMKII is activated by calmodulin mutants that can bind calcium ions only in either the amino or the carboxyl lobes. Previous models of CaMKII activation assumed that binding of calmodulin to individual CaMKII subunits is independent and that autophosphorylation occurs within a ring of 6 subunits. However, a recent structure of CaMKII suggests that pairs of subunits cooperate in binding calmodulin and raises the possibility that the autophosphorylation occurs within pairs of subunits. In chapter 3, I constructed a model in which CaMKII subunits cooperate in binding calmodulin. This model reconciled previous experimental results from the literature that appeared contradictory. In chapter 4, I constructed two models for CaMKII autophosphorylation, in which autophosphorylation can occur either in rings or pairs, and used them to design experiments aimed at differentiating between these possibilities. Previously published measurements and the measurements that I performed are more consistent with autophosphorylation occurring within pairs. In chapter 5, I constructed a model for simultaneous interactions among calcium, calmodulin, and CaMKII, and I used an automatic parameter search algorithm
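
    A minimal sketch of the kind of kinetic scheme discussed here: Hill-type activation of calmodulin by calcium, reversible binding of active calmodulin to CaMKII subunits, and autophosphorylation that requires two adjacent bound subunits (hence the quadratic term), integrated with SciPy. The reduction to two state variables and all rate constants are illustrative assumptions, not the thesis's measured parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical, not fitted values from the thesis)
K_CA, N_HILL = 1.0, 4        # Ca/CaM activation: half-saturation (uM), Hill coefficient
K_ON, K_OFF = 10.0, 1.0      # CaM binding/unbinding to subunits (1/uM/s, 1/s)
K_P = 0.5                    # autophosphorylation rate within a subunit pair (1/s)

def camkii(t, y, ca, cam_total):
    b, p = y                 # fraction bound (unphosphorylated), fraction phosphorylated
    cam_act = cam_total * ca**N_HILL / (K_CA**N_HILL + ca**N_HILL)
    # phosphorylation needs two adjacent CaM-bound subunits -> ~ b**2;
    # phosphorylated subunits are assumed to stay phosphorylated ("trapped")
    db = K_ON * cam_act * (1 - b - p) - K_OFF * b - K_P * b**2
    dp = K_P * b**2
    return [db, dp]

sol = solve_ivp(camkii, (0, 10), [0.0, 0.0], args=(2.0, 0.1), dense_output=True)
t = np.linspace(0, 10, 6)
print("phosphorylated fraction over time:", sol.sol(t)[1].round(3))
```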

  7. Validation of Quantitative Structure-Activity Relationship (QSAR) Model for Photosensitizer Activity Prediction

    PubMed Central

    Frimayanti, Neni; Yam, Mun Li; Lee, Hong Boon; Othman, Rozana; Zain, Sharifuddin M.; Rahman, Noorsaadah Abd.

    2011-01-01

    Photodynamic therapy is a relatively new treatment method for cancer which utilizes a combination of oxygen, a photosensitizer and light to generate reactive singlet oxygen that eradicates tumors via direct cell-killing, vasculature damage and engagement of the immune system. Most of the photosensitizers that are in clinical and pre-clinical assessments, or those that are already approved for clinical use, are mainly based on cyclic tetrapyrroles. In an attempt to discover new effective photosensitizers, we report the use of the quantitative structure-activity relationship (QSAR) method to develop a model that could correlate the structural features of cyclic tetrapyrrole-based compounds with their photodynamic therapy (PDT) activity. In this study, a set of 36 porphyrin derivatives was used in the model development, where 24 of these compounds were in the training set and the remaining 12 compounds were in the test set. The development of the QSAR model involved the use of the multiple linear regression analysis (MLRA) method. This method yielded an r2 value, r2 (CV) value and r2 prediction value of 0.87, 0.71 and 0.70, respectively. The QSAR model was also employed to predict the experimental compounds in an external test set. This external test set comprises 20 porphyrin-based compounds with experimental IC50 values ranging from 0.39 μM to 7.04 μM. Thus the model showed good correlative and predictive ability, with a predictive correlation coefficient (r2 prediction for external test set) of 0.52. The developed QSAR model was used to discover some compounds as new lead photosensitizers from this external test set. PMID:22272096
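
    For readers unfamiliar with the workflow, here is a compact sketch of MLRA-based QSAR fitting and the two headline statistics (training r2 and test-set predictive r2) using scikit-learn. Only the 24/12 training/test split mirrors the abstract; the descriptor matrix and activities are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# 24 training and 12 test compounds, each described by 4 synthetic descriptors
X_train, X_test = rng.normal(size=(24, 4)), rng.normal(size=(12, 4))
true_w = np.array([1.5, -0.8, 0.3, 0.0])          # invented "true" descriptor weights
y_train = X_train @ true_w + rng.normal(scale=0.3, size=24)
y_test = X_test @ true_w + rng.normal(scale=0.3, size=12)

model = LinearRegression().fit(X_train, y_train)   # multiple linear regression analysis
print("r2 (training):", round(r2_score(y_train, model.predict(X_train)), 2))
print("r2 (prediction, test set):", round(r2_score(y_test, model.predict(X_test)), 2))
```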

  8. 40 CFR Table 1 to Subpart Mmmm of... - Model Rule-Increments of Progress and Compliance Schedules for Existing Sewage Sludge...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Compliance Schedules for Existing Sewage Sludge Incineration Units 1 Table 1 to Subpart MMMM of Part 60... Incineration Units Pt. 60, Subpt. MMMM, Table 1 Table 1 to Subpart MMMM of Part 60—Model Rule—Increments of Progress and Compliance Schedules for Existing Sewage Sludge Incineration Units Comply with...

  9. 40 CFR Table 1 to Subpart Mmmm of... - Model Rule-Increments of Progress and Compliance Schedules for Existing Sewage Sludge...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Compliance Schedules for Existing Sewage Sludge Incineration Units 1 Table 1 to Subpart MMMM of Part 60... Incineration Units Pt. 60, Subpt. MMMM, Table 1 Table 1 to Subpart MMMM of Part 60—Model Rule—Increments of Progress and Compliance Schedules for Existing Sewage Sludge Incineration Units Comply with...

  10. 40 CFR Table 6 to Subpart Mmmm of... - Model Rule-Summary of Reporting Requirements for Existing Sewage Sludge Incineration Units a

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Requirements for Existing Sewage Sludge Incineration Units a 6 Table 6 to Subpart MMMM of Part 60 Protection of... Incineration Units Pt. 60, Subpt. MMMM, Table 6 Table 6 to Subpart MMMM of Part 60—Model Rule—Summary of Reporting Requirements for Existing Sewage Sludge Incineration Units a Report Due date Contents...

  11. 40 CFR Table 6 to Subpart Mmmm of... - Model Rule-Summary of Reporting Requirements for Existing Sewage Sludge Incineration Units a

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Requirements for Existing Sewage Sludge Incineration Units a 6 Table 6 to Subpart MMMM of Part 60 Protection of... Incineration Units Pt. 60, Subpt. MMMM, Table 6 Table 6 to Subpart MMMM of Part 60—Model Rule—Summary of Reporting Requirements for Existing Sewage Sludge Incineration Units a Report Due date Contents...

  12. A quantitative structure-activity relationship model for radical scavenging activity of flavonoids.

    PubMed

    Om, A; Kim, J H

    2008-03-01

    A quantitative structure-activity relationship (QSAR) study has been carried out for a training set of 29 flavonoids to correlate and predict the 1,1-diphenyl-2-picrylhydrazyl radical scavenging activity (RSA) values obtained from published data. Genetic algorithm and multiple linear regression were employed to select the descriptors and to generate the best prediction model that relates the structural features to the RSA activities using (1) three-dimensional (3D) Dragon (TALETE srl, Milan, Italy) descriptors and (2) semi-empirical descriptor calculations. The predictivity of the models was estimated by cross-validation with the leave-one-out method. The result showed that a significant improvement of the statistical indices was obtained by deleting outliers. Based on the data for the compounds used in this study, our results suggest a QSAR model of RSA that is based on the following descriptors: 3D-Morse, WHIM, and GETAWAY. In addition, satisfactory relationships between RSA and the semi-empirical descriptors were found, demonstrating that the energy of the highest occupied molecular orbital, total energy, and energy of heat of formation contributed more significantly than all other descriptors.
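
    The leave-one-out cross-validation mentioned here condenses to a few lines; the sketch below computes a LOO q2 for a linear model on synthetic descriptor data (29 compounds, matching the training-set size in the abstract; everything else is invented for illustration).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(29, 5))                      # 29 flavonoids x 5 descriptors (synthetic)
y = X @ rng.normal(size=5) + rng.normal(scale=0.2, size=29)

# Each compound is predicted by a model trained on the other 28
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print("cross-validated q2 (leave-one-out):", round(q2, 3))
```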

  13. Disentangling the Complexity of HGF Signaling by Combining Qualitative and Quantitative Modeling.

    PubMed

    D'Alessandro, Lorenza A; Samaga, Regina; Maiwald, Tim; Rho, Seong-Hwan; Bonefas, Sandra; Raue, Andreas; Iwamoto, Nao; Kienast, Alexandra; Waldow, Katharina; Meyer, Rene; Schilling, Marcel; Timmer, Jens; Klamt, Steffen; Klingmüller, Ursula

    2015-04-01

    Signaling pathways are characterized by crosstalk, feedback and feedforward mechanisms giving rise to highly complex and cell-context specific signaling networks. Dissecting the underlying relations is crucial to predict the impact of targeted perturbations. However, a major challenge in identifying cell-context specific signaling networks is the enormous number of potentially possible interactions. Here, we report a novel hybrid mathematical modeling strategy to systematically unravel hepatocyte growth factor (HGF) stimulated phosphoinositide-3-kinase (PI3K) and mitogen activated protein kinase (MAPK) signaling, which critically contribute to liver regeneration. By combining time-resolved quantitative experimental data generated in primary mouse hepatocytes with interaction graph and ordinary differential equation modeling, we identify and experimentally validate a network structure that represents the experimental data best and indicates specific crosstalk mechanisms. Whereas the identified network is robust against single perturbations, combinatorial inhibition strategies are predicted that result in strong reduction of Akt and ERK activation. Thus, by capitalizing on the advantages of the two modeling approaches, we reduce the high combinatorial complexity and identify cell-context specific signaling networks.

  14. Disentangling the Complexity of HGF Signaling by Combining Qualitative and Quantitative Modeling

    PubMed Central

    Rho, Seong-Hwan; Bonefas, Sandra; Raue, Andreas; Iwamoto, Nao; Kienast, Alexandra; Waldow, Katharina; Meyer, Rene; Schilling, Marcel; Timmer, Jens; Klamt, Steffen; Klingmüller, Ursula

    2015-01-01

    Signaling pathways are characterized by crosstalk, feedback and feedforward mechanisms giving rise to highly complex and cell-context specific signaling networks. Dissecting the underlying relations is crucial to predict the impact of targeted perturbations. However, a major challenge in identifying cell-context specific signaling networks is the enormous number of potentially possible interactions. Here, we report a novel hybrid mathematical modeling strategy to systematically unravel hepatocyte growth factor (HGF) stimulated phosphoinositide-3-kinase (PI3K) and mitogen activated protein kinase (MAPK) signaling, which critically contribute to liver regeneration. By combining time-resolved quantitative experimental data generated in primary mouse hepatocytes with interaction graph and ordinary differential equation modeling, we identify and experimentally validate a network structure that represents the experimental data best and indicates specific crosstalk mechanisms. Whereas the identified network is robust against single perturbations, combinatorial inhibition strategies are predicted that result in strong reduction of Akt and ERK activation. Thus, by capitalizing on the advantages of the two modeling approaches, we reduce the high combinatorial complexity and identify cell-context specific signaling networks. PMID:25905717

  15. A quantitative model for using acridine orange as a transmembrane pH gradient probe.

    PubMed

    Clerc, S; Barenholz, Y

    1998-05-15

    Monitoring the acidification of the internal space of membrane vesicles by proton pumps can be achieved easily with optical probes. Transmembrane pH gradients cause a blue-shift in the absorbance spectrum and the quenching of the fluorescence of the cationic dye acridine orange. It has been postulated that these changes are caused by accumulation and aggregation of the dye inside the vesicles. We tested this hypothesis using liposomes with transmembrane concentration gradients of ammonium sulfate as model system. Fluorescence intensity of acridine orange solutions incubated with liposomes was affected by magnitude of the gradient, volume trapped by vesicles, and temperature. These experimental data were compared to a theoretical model describing the accumulation of acridine orange monomers in the vesicles according to the inside-to-outside ratio of proton concentrations, and the intravesicular formation of sandwich-like piles of acridine orange cations. This theoretical model predicted quantitatively the relationship between the transmembrane pH gradients and spectral changes of acridine orange. Therefore, adequate characterization of aggregation of dye in the lumen of biological vesicles provides the theoretical basis for using acridine orange as an optical probe to quantify transmembrane pH gradients.
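
    The core of such a model is that a membrane-permeant monovalent weak base like acridine orange accumulates inside acidic vesicles roughly in proportion to the transmembrane proton ratio. The sketch below turns that relationship into a crude fluorescence estimate under the strong simplifying assumption that all intravesicular dye aggregates and is fully quenched; it is a cartoon of the paper's model, not its actual equations.

```python
def accumulation_ratio(delta_ph):
    # Equilibrium inside/outside monomer ratio for a monovalent weak base:
    # [AO]in/[AO]out = [H+]in/[H+]out = 10**delta_ph, delta_ph = pH_out - pH_in
    return 10.0 ** delta_ph

def relative_fluorescence(delta_ph, trapped_volume_fraction):
    """F/F0, assuming intravesicular dye stacks into aggregates and is
    fully quenched (a simplification of the monomer-piling model)."""
    r = accumulation_ratio(delta_ph)
    v = trapped_volume_fraction
    f_inside = v * r / (v * r + (1 - v))   # fraction of total dye inside
    return 1.0 - f_inside

for dph in (0, 1, 2, 3):
    print(f"pH gradient {dph}: F/F0 = {relative_fluorescence(dph, 0.001):.3f}")
```

    Even with only 0.1% trapped volume, a three-unit pH gradient pulls half of the dye inside in this toy calculation, which is why the fluorescence quenching is such a sensitive gradient readout.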

  16. Joint prediction of multiple quantitative traits using a Bayesian multivariate antedependence model

    PubMed Central

    Jiang, J; Zhang, Q; Ma, L; Li, J; Wang, Z; Liu, J-F

    2015-01-01

    Predicting organismal phenotypes from genotype data is important for preventive and personalized medicine as well as plant and animal breeding. Although genome-wide association studies (GWAS) for complex traits have discovered a large number of trait- and disease-associated variants, phenotype prediction based on associated variants is usually of low accuracy even for a high-heritability trait, because these variants can typically account for only a limited fraction of total genetic variance. In comparison with GWAS, whole-genome prediction (WGP) methods can increase prediction accuracy by making use of a huge number of variants simultaneously. Among various statistical methods for WGP, multiple-trait models and antedependence models show their respective advantages. To take advantage of both strategies within a unified framework, we proposed a novel multivariate antedependence-based method for joint prediction of multiple quantitative traits using a Bayesian algorithm via modeling a linear relationship of effect vectors between each pair of adjacent markers. Through both simulation and real-data analyses, our studies demonstrated that the proposed antedependence-based multiple-trait WGP method is more accurate and robust than corresponding traditional counterparts (Bayes A and multi-trait Bayes A) under various scenarios. Our method can be readily extended to deal with missing phenotypes and resequence data with rare variants, offering a feasible way to jointly predict phenotypes for multiple complex traits in human genetic epidemiology as well as plant and livestock breeding. PMID:25873147

  17. Joint prediction of multiple quantitative traits using a Bayesian multivariate antedependence model.

    PubMed

    Jiang, J; Zhang, Q; Ma, L; Li, J; Wang, Z; Liu, J-F

    2015-07-01

    Predicting organismal phenotypes from genotype data is important for preventive and personalized medicine as well as plant and animal breeding. Although genome-wide association studies (GWAS) for complex traits have discovered a large number of trait- and disease-associated variants, phenotype prediction based on associated variants is usually of low accuracy even for a high-heritability trait, because these variants can typically account for only a limited fraction of total genetic variance. In comparison with GWAS, whole-genome prediction (WGP) methods can increase prediction accuracy by making use of a huge number of variants simultaneously. Among various statistical methods for WGP, multiple-trait models and antedependence models show their respective advantages. To take advantage of both strategies within a unified framework, we proposed a novel multivariate antedependence-based method for joint prediction of multiple quantitative traits using a Bayesian algorithm via modeling a linear relationship of effect vectors between each pair of adjacent markers. Through both simulation and real-data analyses, our studies demonstrated that the proposed antedependence-based multiple-trait WGP method is more accurate and robust than corresponding traditional counterparts (Bayes A and multi-trait Bayes A) under various scenarios. Our method can be readily extended to deal with missing phenotypes and resequence data with rare variants, offering a feasible way to jointly predict phenotypes for multiple complex traits in human genetic epidemiology as well as plant and livestock breeding.

  18. Interpretable, probability-based confidence metric for continuous quantitative structure-activity relationship models.

    PubMed

    Keefer, Christopher E; Kauffman, Gregory W; Gupta, Rishi Raj

    2013-02-25

    A great deal of research has gone into the development of robust confidence in prediction and applicability domain (AD) measures for quantitative structure-activity relationship (QSAR) models in recent years. Much of the attention has historically focused on structural similarity, which can be defined in many forms and flavors. A concept that is frequently overlooked in the realm of the QSAR applicability domain is how the local activity landscape affects the accuracy of a prediction. In this work, we describe an approach that pairs information about both the chemical similarity and activity landscape of a test compound's neighborhood into a single calculated confidence value. We also present an approach for converting this value into an interpretable confidence metric that has a simple and informative meaning across data sets. The approach will be introduced to the reader in the context of models built upon four diverse literature data sets. The steps we will outline include the definition of similarity used to determine nearest neighbors (NN), how we incorporate the NN activity landscape with a similarity-weighted root-mean-square distance (wRMSD) value, and how that value is then calibrated to generate an intuitive confidence metric for prospective application. Finally, we will illustrate the prospective performance of the approach on five proprietary models whose predictions and confidence metrics have been tracked for more than a year.
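
    A paraphrase of the wRMSD idea in code: weight the squared distances between a test compound's prediction and its nearest neighbours' measured activities by chemical similarity, so that a rugged local activity landscape inflates the distance. The normalization scheme and the example numbers are illustrative assumptions, not the paper's exact calibration.

```python
import numpy as np

def wrmsd(similarities, neighbor_activities, predicted):
    """Similarity-weighted root-mean-square distance between a test
    compound's predicted activity and the measured activities of its
    nearest neighbours (weights here simply normalized similarities)."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()
    d2 = (np.asarray(neighbor_activities) - predicted) ** 2
    return float(np.sqrt(np.sum(w * d2)))

# Flat local activity landscape -> low wRMSD (high confidence)
print(wrmsd([0.9, 0.85, 0.8, 0.75, 0.7], [6.1, 6.3, 5.9, 6.0, 6.2], predicted=6.05))
# Rugged local landscape, same similarities -> high wRMSD (low confidence)
print(wrmsd([0.9, 0.85, 0.8, 0.75, 0.7], [4.0, 7.5, 5.0, 8.1, 3.9], predicted=6.05))
```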

  19. Switching mechanism for TiO2 memristor and quantitative analysis of exponential model parameters

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Ping; Chen, Min; Shen, Yi

    2015-08-01

    The memristor, as the fourth basic circuit element, has drawn worldwide attention since its physical implementation was released by HP Labs in 2008. However, at the nanoscale, physical realization of memristors remains difficult, so a well-understood model helps us to study the characteristics of a memristor. In this paper, we analyze a possible mechanism for the switching behavior of a memristor with a Pt/TiO2/Pt structure, and explain the changes of the electronic barrier at the Pt/TiO2 interface. Then, a quantitative analysis of each parameter in the exponential memristor model is conducted based on the calculation results. The analysis results are validated by simulation results. The efforts made in this paper will provide researchers with theoretical guidance on choosing appropriate values for (α, β, χ, γ) in this exponential model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61374150 and 61374171), the State Key Program of the National Natural Science Foundation of China (Grant No. 61134012), the National Basic Research Program of China (Grant No. 2011CB710606), and the Fundamental Research Funds for the Central Universities, China (Grant No. 2013TS126).
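
    One commonly cited exponential memristor I-V formulation takes the form i = w^n · β · sinh(αv) + χ · (e^(γv) - 1), with a state variable w driven by the applied voltage. The sketch below integrates such a model with a simple Euler step; it is a generic formulation with invented parameter values, and the paper's exact model and state equation may differ.

```python
import numpy as np

# Illustrative parameters for the exponential I-V relation
ALPHA, BETA, CHI, GAMMA, N = 2.0, 9e-5, 1e-4, 4.0, 5

def current(v, w):
    # i = w^N * beta * sinh(alpha*v) + chi * (exp(gamma*v) - 1)
    return w**N * BETA * np.sinh(ALPHA * v) + CHI * np.expm1(GAMMA * v)

def simulate(v_of_t, dt=1e-4, a=4.0, m=5, w0=0.5):
    """Euler integration of a simple state equation dw/dt = a * v**m
    (odd m preserves the sign of v); w is clipped to [0, 1]."""
    w, out = w0, []
    for v in v_of_t:
        out.append(current(v, w))
        w = np.clip(w + dt * a * v**m, 0.0, 1.0)
    return np.array(out)

t = np.linspace(0, 2 * np.pi, 2000)
i = simulate(np.sin(t))       # sweeping v = sin(t) traces a pinched hysteresis loop in (v, i)
print("peak current:", i.max())
```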

  1. Effect of Arterial Deprivation on Growing Femoral Epiphysis: Quantitative Magnetic Resonance Imaging Using a Piglet Model

    PubMed Central

    Cheon, Jung-Eun; Kim, In-One; Kim, Woo Sun; Choi, Young Hun

    2015-01-01

    Objective To investigate the usefulness of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion MRI for the evaluation of femoral head ischemia. Materials and Methods Unilateral femoral head ischemia was induced by selective embolization of the medial circumflex femoral artery in 10 piglets. All MRIs were performed immediately (1 hour) after embolization and at 1, 2, and 4 weeks after embolization. Apparent diffusion coefficients (ADCs) were calculated for the femoral head. The estimated pharmacokinetic parameters (Kep and Ve from a two-compartment model) and semi-quantitative parameters including peak enhancement, time-to-peak (TTP), and contrast washout were evaluated. Results The epiphyseal ADC values of the ischemic hip decreased immediately (1 hour) after embolization. However, they increased rapidly at 1 week after embolization and remained elevated until 4 weeks after embolization. Perfusion MRI of ischemic hips showed decreased epiphyseal perfusion with decreased Kep immediately after embolization. Signal intensity-time curves showed delayed TTP with limited contrast washout immediately post-embolization. At 1-2 weeks after embolization, spontaneous reperfusion was observed in ischemic epiphyses. The changes in ADC (p = 0.043) and Kep (p = 0.043) differed significantly between immediately (1 hour) after embolization and 1 week post-embolization. Conclusion Diffusion MRI and the pharmacokinetic model fitted to DCE-MRI data are useful in depicting early changes of perfusion and tissue damage in this model of femoral head ischemia in skeletally immature piglets. PMID:25995692

  2. 77 FR 41985 - Use of Influenza Disease Models To Quantitatively Evaluate the Benefits and Risks of Vaccines: A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... model used by the Center for Biologics Evaluation and Research (CBER) and suggestions for further...: Richard Forshee, Center for Biologics Evaluation and Research (HFM-210), Food and Drug Administration... disease computer simulation models to generate quantitative estimates of the benefits and risks...

  3. Quantitative models of hydrothermal fluid-mineral reaction: The Ischia case

    NASA Astrophysics Data System (ADS)

    Di Napoli, Rossella; Federico, Cinzia; Aiuppa, Alessandro; D'Antonio, Massimo; Valenza, Mariano

    2013-03-01

    The intricate pathways of fluid-mineral reactions occurring underneath active hydrothermal systems are explored in this study by applying reaction path modelling to the Ischia case study. Ischia Island, in Southern Italy, hosts a well-developed and structurally complex hydrothermal system which, because of its heterogeneity in chemical and physical properties, is an ideal test site for evaluating the potentialities/limitations of quantitative geochemical models of hydrothermal reactions. We used the EQ3/6 software package, version 7.2b, to model reaction of infiltrating waters (mixtures of meteoric water and seawater in variable proportions) with Ischia's reservoir rocks (the Mount Epomeo Green Tuff units; MEGT). The mineral assemblage and composition of the MEGT units were initially characterised by purpose-designed optical microscopy and electron microprobe analysis, showing that phenocrysts (dominantly alkali-feldspars and plagioclase) are set in a pervasively altered (with abundant clay minerals and zeolites) groundmass. Reaction of infiltrating waters with MEGT minerals was simulated over a range of realistic (for Ischia) temperatures (95-260 °C) and CO2 fugacities (10^-0.2 to 10^0.5 bar). During the model runs, a set of secondary minerals (selected based on independent information from alteration minerals' studies) was allowed to precipitate from model solutions when saturation was achieved. The compositional evolution of model solutions obtained in the 95-260 °C runs was finally compared with compositions of Ischia's thermal groundwaters, demonstrating an overall agreement. Our simulations, in particular, well reproduce the Mg-depleting maturation path of hydrothermal solutions, and have end-of-run model solutions whose Na-K-Mg compositions well reflect attainment of full-equilibrium conditions at run temperature. High-temperature (180-260 °C) model runs are those best matching the Na-K-Mg compositions of Ischia's most chemically mature water samples

  4. Analysis of Water Conflicts across Natural and Societal Boundaries: Integration of Quantitative Modeling and Qualitative Reasoning

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Balaram, P.; Islam, S.

    2009-12-01

    , the knowledge generated from these studies cannot be easily generalized or transferred to other basins. Here, we present an approach to integrate quantitative and qualitative methods to study water issues and capture the contextual knowledge of water management by combining the NSSs framework and an area of artificial intelligence called qualitative reasoning. Using the Apalachicola-Chattahoochee-Flint (ACF) River Basin dispute as an example, we demonstrate how quantitative modeling and qualitative reasoning can be integrated to examine the impact of over-abstraction of water from the river on the ecosystem and the role of governance in shaping the evolution of the ACF water dispute.

  5. Parametric modeling for quantitative analysis of pulmonary structure to function relationships

    NASA Astrophysics Data System (ADS)

    Haider, Clifton R.; Bartholmai, Brian J.; Holmes, David R., III; Camp, Jon J.; Robb, Richard A.

    2005-04-01

    While lung anatomy is well understood, pulmonary structure-to-function relationships such as the complex elastic deformation of the lung during respiration are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability of the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computation of dynamic anatomic parametric models of the lung during respiration, correlating high-resolution anatomy to underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the surface of the lung. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifest by alterations in normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial data which quantitatively illustrates detailed anatomic structure-to-function relationships that traditional methods cannot provide.

  6. QSTR modeling for qualitative and quantitative toxicity predictions of diverse chemical pesticides in honey bee for regulatory purposes.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Basant, Nikita; Mohan, Dinesh

    2014-09-15

    Pesticides are toxic chemicals designed for specific purposes, and they can harm nontarget species as well. The honey bee is considered a nontarget test species for toxicity evaluation of chemicals. Global QSTR (quantitative structure-toxicity relationship) models were established for qualitative and quantitative toxicity prediction of pesticides in the honey bee (Apis mellifera) based on the experimental toxicity data of 237 structurally diverse pesticides. Structural diversity of the chemical pesticides and nonlinear dependence in the toxicity data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) QSTR models were constructed for classification (two and four categories) and function optimization problems using the toxicity end point in honey bees. The predictive power of the QSTR models was tested through rigorous internal and external validation procedures employing a wide series of statistical checks. On the complete data, the PNN-QSTR model rendered a classification accuracy of 96.62% (two-category) and 95.57% (four-category), while the GRNN-QSTR model yielded a correlation (R(2)) of 0.841 between the measured and predicted toxicity values with a mean squared error (MSE) of 0.22. The results suggest the appropriateness of the developed QSTR models for reliably predicting qualitative and quantitative toxicities of pesticides in the honey bee. Both the PNN and GRNN based QSTR models constructed here can be useful tools in predicting the qualitative and quantitative toxicities of new chemical pesticides for regulatory purposes.
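
    GRNN prediction itself is compact: it is kernel-weighted (Nadaraya-Watson) regression with a single smoothing parameter. The sketch below is a generic implementation on synthetic descriptor data, not the authors' fitted model; the compound count (237) is the only number borrowed from the abstract.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """Generalized regression neural network: the prediction is a
    Gaussian-kernel-weighted average of the training targets, with a
    single smoothing parameter sigma."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma**2))
    return float(np.dot(w, y_train) / w.sum())

rng = np.random.default_rng(3)
X = rng.normal(size=(237, 6))                  # 237 pesticides x 6 descriptors (synthetic)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=237)
print(grnn_predict(X, y, X[0]))                # should land close to y[0]
```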

  7. Quantitative modeling of total ionizing dose reliability effects in device silicon dioxide layers

    NASA Astrophysics Data System (ADS)

    Rowsey, Nicole L.

    The electrical breakdown of oxides and oxide/semiconductor interfaces is one of the main reasons for device failure in integrated circuits, especially devices under high-stress conditions. One high-stress environment of interest is the space environment. All electronics are vulnerable to ionizing radiation; any high-energy particle that passes through an insulating layer will deposit unwanted charge there, causing shifts in device characteristics. Designing electronics for use in space can be a challenge, because much more energetic radiation exists in space than on Earth, as there is no atmosphere in space to collide with, and thereby reduce the energy of, energetic particles. Although oxide charging due to ionizing radiation creates well-known changes in device characteristics, or total ionizing dose effects, it is still poorly understood exactly how these changes come about. There are many theories that draw upon a large body of both experimental work and, more recently, quantum-mechanical first principles calculations at the molecular level. This work uses FLOODS, a 3D object-oriented device simulator with multi-physics capability, to investigate these theories by simulating oxide degradation in realistic device geometries and comparing the subsequent degradation in device characteristics to experimental measurements. The charge trapping and defect-modulated transport models developed and implemented here have resulted in the first quantitative account of the enhanced low-dose-rate sensitivity effect, and are applicable in a comprehensive range of hydrogen environments. Measurements show that devices exposed to ionizing radiation at high dose rates exhibit less degradation than those exposed at low dose rates. Furthermore, the observed trend differs depending on the amount of hydrogen available before, during, and after irradiation. It is therefore important to understand and take into account the effects of dose rate and hydrogen when developing accelerated

  8. Physiologically Based Pharmacokinetic Modeling Framework for Quantitative Prediction of an Herb–Drug Interaction

    PubMed Central

    Brantley, S J; Gufford, B T; Dua, R; Fediuk, D J; Graf, T N; Scarlett, Y V; Frederick, K S; Fisher, M B; Oberlies, N H; Paine, M F

    2014-01-01

    Herb–drug interaction predictions remain challenging. Physiologically based pharmacokinetic (PBPK) modeling was used to improve prediction accuracy of potential herb–drug interactions using the semipurified milk thistle preparation, silibinin, as an exemplar herbal product. Interactions between silibinin constituents and the probe substrates warfarin (CYP2C9) and midazolam (CYP3A) were simulated. A low silibinin dose (160 mg/day × 14 days) was predicted to increase midazolam area under the curve (AUC) by 1%, which was corroborated with external data; a higher dose (1,650 mg/day × 7 days) was predicted to increase midazolam and (S)-warfarin AUC by 5% and 4%, respectively. A proof-of-concept clinical study confirmed minimal interaction between high-dose silibinin and both midazolam and (S)-warfarin (9 and 13% increase in AUC, respectively). Unexpectedly, (R)-warfarin AUC decreased (by 15%), but this is unlikely to be clinically important. Application of this PBPK modeling framework to other herb–drug interactions could facilitate development of guidelines for quantitative prediction of clinically relevant interactions. PMID:24670388
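
    A full PBPK simulation is beyond a short example, but the simpler mechanistic static model conveys the arithmetic behind such predictions: the victim drug's AUC fold-change under reversible inhibition depends on the unbound inhibitor concentration, the inhibition constant, and the fraction of clearance through the inhibited enzyme. This is not the authors' PBPK framework, and the numbers below are hypothetical, chosen only so the output lands near the small interactions reported.

```python
def auc_ratio_reversible(i_u, ki, fm):
    """Mechanistic static (not PBPK) estimate of a victim drug's AUC
    fold-change under reversible enzyme inhibition:
        AUCR = 1 / ( fm / (1 + I/Ki) + (1 - fm) )
    i_u : unbound inhibitor concentration at the enzyme
    ki  : reversible inhibition constant (same units as i_u)
    fm  : fraction of victim clearance via the inhibited enzyme
    """
    return 1.0 / (fm / (1.0 + i_u / ki) + (1.0 - fm))

# Weak inhibition of a CYP3A-cleared probe such as midazolam (hypothetical values):
print(f"predicted AUC ratio: {auc_ratio_reversible(i_u=0.05, ki=1.0, fm=0.9):.2f}")
```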

  9. Overcoming pain thresholds with multilevel models-an example using quantitative sensory testing (QST) data.

    PubMed

    Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris

    2015-01-01

    The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. To provide an example we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice. PMID:26557435
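
    The two SDT parameters can be estimated per participant from hit and false-alarm rates before any multilevel machinery is added. The sketch shows the classic equal-variance estimates; the article's actual contribution, estimating these parameters within a multilevel model, is not reproduced here, and the rates are hypothetical.

```python
from scipy.stats import norm

def sdt_parameters(hit_rate, false_alarm_rate):
    """Equal-variance signal detection estimates:
    sensitivity d' = z(H) - z(FA); response bias c = -(z(H) + z(FA)) / 2."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    return z_h - z_fa, -(z_h + z_fa) / 2

# Hypothetical "painful" report rates for painful vs non-painful stimulations
d_prime, criterion = sdt_parameters(hit_rate=0.85, false_alarm_rate=0.20)
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```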

  10. Probabilistic Quantitative Precipitation Forecasting over East China using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Yang, Ai; Yuan, Huiling

    2014-05-01

    Bayesian model averaging (BMA) is a post-processing method that weights the predictive probability density functions (PDFs) of individual ensemble members. This study investigates the BMA method for calibrating quantitative precipitation forecasts (QPFs) from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database. The QPFs over East Asia during summer (June-August) 2008-2011 are generated from six operational ensemble prediction systems (EPSs), including ECMWF, UKMO, NCEP, CMC, JMA, CMA, and multi-center ensembles of their combinations. The satellite-based precipitation estimate product TRMM 3B42 V7 is used as the verification dataset. In the BMA post-processing for precipitation forecasts, the PDF matching method is first applied to bias-correct systematic errors in each forecast member, by adjusting PDFs of forecasts to match PDFs of observations. Next, a logistic regression and a two-parameter gamma distribution are used to fit the probability of rainfall occurrence and the precipitation distribution. Through these two steps, the BMA post-processing bias-corrects ensemble forecasts systematically. The 60-70% cumulative distribution function (CDF) predictions estimate moderate precipitation well compared to the raw ensemble mean, while the 90% upper boundary of BMA CDF predictions can be set as a threshold for an extreme precipitation alarm. In general, the BMA method is more capable of multi-center ensemble post-processing, which improves probabilistic QPFs (PQPFs) with better ensemble spread and reliability. KEYWORDS: Bayesian model averaging (BMA); post-processing; ensemble forecast; TIGGE
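
    Structurally, a BMA predictive distribution for precipitation is a weighted mixture over members, each contributing a point mass at zero (no rain) plus a gamma component for positive amounts. The sketch evaluates such a mixture CDF with invented weights and parameters; fitting them by maximum likelihood, as in BMA proper, is omitted.

```python
from scipy.stats import gamma

def bma_predictive_cdf(x, forecasts, weights, p_rain, shape, scale_coef):
    """BMA predictive CDF for precipitation amount x: a weighted mixture of
    per-member distributions, each with probability (1 - p_rain_k) of zero
    precipitation and a gamma CDF (scale tied to the member forecast) for
    positive amounts. All parameter choices are illustrative, not fitted."""
    cdf = 0.0
    for f, w, p in zip(forecasts, weights, p_rain):
        wet_cdf = gamma.cdf(x, a=shape, scale=scale_coef * max(f, 0.1))
        cdf += w * ((1 - p) + p * wet_cdf)
    return cdf

members = [3.0, 5.5, 2.0, 8.0]          # bias-corrected member QPFs (mm)
weights = [0.4, 0.3, 0.2, 0.1]          # BMA weights (sum to 1)
p_rain  = [0.7, 0.8, 0.6, 0.9]          # rain-occurrence probabilities (logistic fit)
for x in (1, 5, 20):
    print(f"P(precip <= {x} mm) = {bma_predictive_cdf(x, members, weights, p_rain, 0.8, 2.0):.2f}")
```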

  11. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behavior and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.
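
    The agreement statistic used in the validation is the Dice similarity coefficient; a minimal implementation and a toy comparison of an automatic versus a manual mask (the masks here are synthetic rectangles, not gel images):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((100, 100), bool); auto[20:80, 20:80] = True       # automatic gel segmentation
manual = np.zeros((100, 100), bool); manual[22:82, 21:79] = True   # manual reference
print(f"DSC = {dice(auto, manual):.3f}")   # values > 0.95 indicate close agreement
```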

  12. Bayesian models with dominance effects for genomic evaluation of quantitative traits.

    PubMed

    Wellmann, Robin; Bennewitz, Jörn

    2012-02-01

    Genomic selection refers to the use of dense, genome-wide markers for the prediction of breeding values (BV) and subsequent selection of breeding individuals. It has become a standard tool in livestock and plant breeding for accelerating genetic gain. The core of genomic selection is the prediction of a large number of marker effects from a limited number of observations. Various Bayesian methods that successfully cope with this challenge are known. Until now, the main research emphasis has been on additive genetic effects. Dominance coefficients of quantitative trait loci (QTLs), however, can also be large, even if dominance variance and inbreeding depression are relatively small. Considering dominance might contribute to the accuracy of genomic selection and serve as a guide for choosing mating pairs with good combining abilities. A general hierarchical Bayesian model for genomic selection that can realistically account for dominance is introduced. Several submodels are proposed and compared with respect to their ability to predict genomic BV, dominance deviations and genotypic values (GV) by stochastic simulation. These submodels differ in the way the dependency between additive and dominance effects is modelled. Depending on the marker panel, the inclusion of dominance effects increased the accuracy of GV by about 17% and the accuracy of genomic BV by 2% in the offspring. Furthermore, it slowed down the decrease of the accuracies in subsequent generations. It was possible to obtain accurate estimates of GV, which enables mate selection programmes.
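
    The additive/dominance decomposition rests on a simple genotype coding; below is a sketch with the standard additive covariate (genotype minus one) and heterozygote-indicator dominance covariate. The effect sizes are invented, and this coding is a textbook convention rather than the paper's specific hierarchical prior.

```python
import numpy as np

def design_matrices(genotypes):
    """Standard additive/dominance coding for SNP genotypes 0/1/2
    (copies of one allele): additive = g - 1 in {-1, 0, 1};
    dominance = 1 for heterozygotes, 0 otherwise."""
    g = np.asarray(genotypes, dtype=float)
    return g - 1.0, (g == 1).astype(float)

g = np.array([[0, 1, 2], [2, 2, 1], [1, 0, 0]])   # 3 individuals x 3 markers
X_add, X_dom = design_matrices(g)
a = np.array([0.5, -0.2, 0.1])    # additive marker effects (illustrative)
d = np.array([0.3, 0.0, 0.4])     # dominance effects (illustrative)
breeding_values = X_add @ a                  # genomic BV: additive part only
genotypic_values = X_add @ a + X_dom @ d     # GV adds dominance deviations
print(breeding_values, genotypic_values)
```

    The split makes the abstract's distinction concrete: predicted BV guide selection of parents, while GV (including dominance) guide the choice of specific mating pairs.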

  13. How plants manage food reserves at night: quantitative models and open questions

    PubMed Central

    Scialdone, Antonio; Howard, Martin

    2015-01-01

    In order to cope with night-time darkness, plants during the day allocate part of their photosynthate for storage, often as starch. This stored reserve is then degraded at night to sustain metabolism and growth. However, night-time starch degradation must be tightly controlled, as over-rapid turnover results in premature depletion of starch before dawn, leading to starvation. Recent experiments in Arabidopsis have shown that starch degradation proceeds at a constant rate during the night and is set such that starch reserves are exhausted almost precisely at dawn. Intriguingly, this pattern is robust with the degradation rate being adjusted to compensate for unexpected changes in the time of darkness onset. While a fundamental role for the circadian clock is well-established, the underlying mechanisms controlling starch degradation remain poorly characterized. Here, we discuss recent quantitative models that have been proposed to explain how plants can compute the appropriate starch degradation rate, a process that requires an effective arithmetic division calculation. We review experimental confirmation of the models, and describe aspects that require further investigation. Overall, the process of night-time starch degradation necessitates a fundamental metabolic role for the circadian clock and, more generally, highlights how cells process information in order to optimally manage their resources. PMID:25873925
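
    The "arithmetic division" at the heart of these models is literally rate = S/T: divide the starch present at dusk by the hours remaining until dawn. A toy calculation shows how the same computation self-adjusts when darkness begins unexpectedly early (the starch amounts are arbitrary units):

```python
def degradation_rate(starch_now, hours_until_dawn):
    """Constant-rate policy attributed to the plant in the quantitative
    models: consume remaining starch S at rate S / T so that reserves
    run out precisely at dawn."""
    return starch_now / hours_until_dawn

print(degradation_rate(100.0, 12.0))   # normal 12 h night: 8.33 units/h
print(degradation_rate(100.0, 16.0))   # early dusk, 16 h night: 6.25 units/h,
                                       # again exhausting starch exactly at dawn
```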

  14. A probabilistic quantitative risk assessment model for the long-term work zone crashes.

    PubMed

    Meng, Qiang; Weng, Jinxian; Qu, Xiaobo

    2010-11-01

    Work zones, especially long-term work zones, increase traffic conflicts and cause safety problems. Proper casualty risk assessment for a work zone is of importance for both traffic safety engineers and travelers. This paper develops a novel probabilistic quantitative risk assessment (QRA) model to evaluate the casualty risk combining frequency and consequence of all accident scenarios triggered by long-term work zone crashes. The casualty risk is measured by the individual risk and the societal risk. The individual risk can be interpreted as the frequency of a driver/passenger being killed or injured, and the societal risk describes the relation between frequency and the number of casualties. The proposed probabilistic QRA model consists of the estimation of work zone crash frequency, an event tree and consequence estimation models. There are seven intermediate events--age (A), crash unit (CU), vehicle type (VT), alcohol (AL), light condition (LC), crash type (CT) and severity (S)--in the event tree. Since the estimated probability of some intermediate events may have large uncertainty, that uncertainty is characterized by a random variable. The consequence estimation model takes into account the combined effects of speed and emergency medical service response time (ERT) on the consequence of a work zone crash. Finally, a numerical example based on the Southeast Michigan work zone crash data is carried out. The numerical results show that there will be a 62% decrease of individual fatality risk and a 44% reduction of individual injury risk if the mean travel speed is slowed down by 20%. In addition, there will be a 5% reduction of individual fatality risk and a 0.05% reduction of individual injury risk if ERT is reduced by 20%. In other words, slowing down speed is more effective than reducing ERT in casualty risk mitigation.
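
    Mechanically, the event-tree part of such a QRA multiplies a crash frequency by the branch probabilities along each scenario path and by the conditional casualty probabilities. A toy three-scenario version in that spirit (all numbers invented, not the paper's Michigan estimates, and the seven-event tree collapsed to three aggregate paths):

```python
# crash frequency and scenario branches are illustrative placeholders
crash_frequency = 120.0     # work-zone crashes per year

scenarios = [
    # (path probability through the intermediate events, P(fatality), P(injury))
    (0.02, 0.30, 0.50),
    (0.10, 0.05, 0.40),
    (0.88, 0.002, 0.10),
]

fatality_risk = crash_frequency * sum(p * pf for p, pf, _ in scenarios)
injury_risk = crash_frequency * sum(p * pi for p, _, pi in scenarios)
print(f"expected fatalities/yr: {fatality_risk:.2f}, injuries/yr: {injury_risk:.2f}")
```

    In the full model, uncertain branch probabilities would be random variables rather than fixed numbers, and the consequence terms would depend on speed and ERT.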

  15. Quantitative Phosphoproteomics Reveals Wee1 Kinase as a Therapeutic Target in a Model of Proneural Glioblastoma.

    PubMed

    Lescarbeau, Rebecca S; Lei, Liang; Bakken, Katrina K; Sims, Peter A; Sarkaria, Jann N; Canoll, Peter; White, Forest M

    2016-06-01

    Glioblastoma (GBM) is the most common malignant primary brain cancer. With a median survival of about a year, this disease urgently needs new treatment approaches. To identify signaling molecules regulating GBM progression in a genetically engineered murine model of proneural GBM, we quantified phosphotyrosine-mediated signaling using mass spectrometry. Oncogenic signals, including phosphorylated ERK MAPK, PI3K, and PDGFR, were found to be increased in the murine tumors relative to brain. Phosphorylation of CDK1 pY15, associated with the G2 arrest checkpoint, was identified as the most differentially phosphorylated site, with a 14-fold increase in phosphorylation in the tumors. To assess the role of this checkpoint as a potential therapeutic target, syngeneic primary cell lines derived from these tumors were treated with MK-1775, an inhibitor of Wee1, the kinase responsible for CDK1 Y15 phosphorylation. MK-1775 treatment led to mitotic catastrophe, as defined by increased DNA damage and cell death by apoptosis. To assess the extensibility of targeting Wee1/CDK1 in GBM, patient-derived xenograft (PDX) cell lines were also treated with MK-1775. Although the response was more heterogeneous, on-target Wee1 inhibition led to decreased CDK1 Y15 phosphorylation and increased DNA damage and apoptosis in each line. These results were also validated in vivo, where single-agent MK-1775 demonstrated an antitumor effect on a flank PDX tumor model, increasing mouse survival by 1.74-fold. This study highlights the ability of unbiased quantitative phosphoproteomics to reveal therapeutic targets in tumor models, and the potential for Wee1 inhibition as a treatment approach in preclinical models of GBM. Mol Cancer Ther; 15(6); 1332-43. ©2016 AACR. PMID:27196784

  16. Quantitative and Qualitative Models in Support of the Supraluminal Model of Pulsar Emission

    NASA Astrophysics Data System (ADS)

    Singleton, John; Schmidt, A.; Middleditch, J.; Ardavan, H.; Ardavan, A.

    2012-01-01

    Maxwell's equations establish that patterns of electric charges and currents can be animated to travel faster than the speed of light in vacuo, and that these supraluminal distribution patterns emit tightly focused packets of electromagnetic radiation that are fundamentally different from the emissions of previously known terrestrial radiation sources. Since a pattern of electric polarization is not bound to charged particles (though effected by them), it can be made to move faster than light. Recent theoretical work, data gathered from ground-based astrophysics experiments, and the analysis of pulsar observational data all strongly suggest supraluminal polarization currents whose distribution pattern follows a circular orbit as the mechanism of pulsar radiation. Here we present numerical calculations of the radiation field generated by a localized charge - as well as ``bunches'' of such charges - in supraluminal rotation and compare our studies to astronomical observations of rapidly spinning, highly magnetized stellar remnants. We find that the radiated field has the following intrinsic characteristics: (i) it is sharply focused along a rigidly rotating spiral-shaped beam, (ii) it consists of either one or three concurrent polarization modes (depending on the relative position of the observer) that constitute contributions to the field from differing retarded times, (iii) it is highly elliptically polarized, (iv) the position angles of each of its linearly polarized modes swing across the beam by as much as 180 degrees, and (v) the position angles of two of its modes remain approximately orthogonal throughout their excursion across the beam. Our findings show that virtually all of the enigmatic features of pulsar radiation - the polarization properties, image structure and apparent radiation temperature as well as peak spectral frequencies - can be explained using a single, elegant model with few input parameters and no external assumptions.

  17. Quantitative structure activity relationship model for predicting the depletion percentage of skin allergic chemical substances of glutathione.

    PubMed

    Si, Hongzong; Wang, Tao; Zhang, Kejun; Duan, Yun-Bo; Yuan, Shuping; Fu, Aiping; Hu, Zhide

    2007-05-22

    A quantitative model was developed by gene expression programming (GEP) to predict the depletion percentage of glutathione (DPG) for skin allergic chemical substances. Each compound was represented by several calculated structural descriptors involving constitutional, topological, geometrical, electrostatic and quantum-chemical features. The GEP method produced a nonlinear, five-descriptor quantitative model with a mean error and a correlation coefficient of 10.52 and 0.94 for the training set, and 22.80 and 0.85 for the test set, respectively. The GEP-predicted results are shown to be in good agreement with experimental ones, better than those of the heuristic method.

  18. The use of electromagnetic induction methods for establishing quantitative permafrost models in West Greenland

    NASA Astrophysics Data System (ADS)

    Ingeman-Nielsen, Thomas; Brandt, Inooraq

    2010-05-01

    permafrozen sediments is generally not available in Greenland, and mobilization costs are therefore considerable, thus limiting the use of geotechnical borings to larger infrastructure and construction projects. To overcome these problems, we have tested the use of shallow Transient ElectroMagnetic (TEM) measurements to provide constraints in terms of the depth to, and resistivity of, the conductive saline layer. We tested such a setup at two field sites in the Ilulissat area (mid-west Greenland), one with available borehole information (site A), the second without (site C). VES and TEM soundings were collected at each site and the respective data sets were subsequently inverted using a mutually constrained inversion scheme. At site A, the TEM measurements (20 m × 20 m square loop, in-loop configuration) show substantial and repeatable negative-amplitude segments, and it has therefore not yet been possible to provide a quantitative interpretation for this location. Negative segments are typically a sign of induced polarization (IP) or cultural effects. Forward modeling based on inversion of the VES data constrained with borehole information has indicated that IP effects could indeed be the cause of the observed anomaly, although such effects are not normally expected in permafrost or saline deposits. Data from site C have shown that jointly inverting the TEM and VES measurements provides well-determined estimates for all layer parameters except the thickness of the active layer and the resistivity of the bedrock. The active-layer thickness may be easily probed to provide prior information on this parameter, and the bedrock resistivity is of limited interest in technical applications. Although no confirming borehole information is available at this site, these results indicate that joint or mutually constrained inversion of TEM and VES data is feasible and that this setup may provide a fast and cost-effective method for establishing quantitative interpretations of permafrost structure in

  19. Estimation of glacial outburst floods in Himalayan watersheds by means of quantitative modelling

    NASA Astrophysics Data System (ADS)

    Brauner, M.; Agner, P.; Vogl, A.; Leber, D.; Haeusler, H.; Wangda, D.

    2003-04-01

    In the Himalayas, intense glacier retreat and rapidly developing settlement activity in the downstream valleys result in dramatically increasing glacier lake outburst flood risk. As settlement activity concentrates on broad and productive valley areas, typically 10 to 70 kilometres downstream of the flood source, hazard awareness and preparedness are limited. The application of quantitative assessment methodology is therefore crucial in order to delineate flood-prone areas and develop hazard-preparedness concepts by means of scenario modelling. For dam-breach back-calculation the 1D simulation tool BREACH is utilised; initiation by surge waves and the broad sediment size spectrum of tills are generally difficult to implement, so a tool with a long application history was chosen. The flood propagation is simulated with the 2D hydraulic simulation model FLO2D, which enables routing of both the water flood and its sediment load. In three Himalayan watersheds (Pho Chhu valley, Bhutan; Tam Pokhari valley, Nepal), recent glacier lake outbursts (each with more than 20 million m³ in volume) and the consecutive floods are simulated and calibrated by means of multi-temporal morphological information, high-water marks, geomorphological interpretation and eyewitness consultation. These calculations show that for these events the dam-breach process was slow (0.75 to 3 hours), with low flood hydrographs. The flood propagation was governed by a sequence of low-sloping, depositional channel sections and steep channel sections with intense lateral sediment mobilisation and temporary blockage. This resulted in a positive feedback and prolonged the flood. By means of sensitivity analysis, the influence of morphological changes during the events and the importance of the dam-breach process to the event as a whole are estimated. It can be shown that the accuracy of the high-water limit is governed by the following processes: sediment mobilisation, breaching process, water volume, morphological changes

  20. Chronic Spinal Compression Model in Minipigs: A Systematic Behavioral, Qualitative, and Quantitative Neuropathological Study

    PubMed Central

    Navarro, Roman; Juhas, Stefan; Keshavarzi, Sassan; Juhasova, Jana; Motlik, Jan; Johe, Karl; Marsala, Silvia; Scadeng, Miriam; Lazar, Peter; Tomori, Zoltan; Schulteis, Gery; Beattie, Michael; Ciacci, Joseph D.

    2012-01-01

    Abstract The goal of the present study was to develop a porcine spinal cord injury (SCI) model, and to describe the neurological outcome and characterize the corresponding quantitative and qualitative histological changes at 4–9 months after injury. Adult Gottingen-Minnesota minipigs were anesthetized and placed in a spine immobilization frame. The exposed T12 spinal segment was compressed in a dorso-ventral direction using a 5-mm-diameter circular bar with a progressively increasing peak force (1.5, 2.0, or 2.5 kg) at a velocity of 3 cm/sec. During recovery, motor and sensory function were periodically monitored. After survival, the animals were perfusion fixed and the extent of local SCI was analyzed by (1) post-mortem MRI analysis of dissected spinal cords, (2) qualitative and quantitative analysis of axonal survival at the epicenter of injury, and (3) defining the presence of local inflammatory changes, astrocytosis, and schwannosis. Following 2.5-kg spinal cord compression the animals demonstrated a near complete loss of motor and sensory function with no recovery over the next 4–9 months. Those that underwent spinal cord compression with 2 kg force developed an incomplete injury with progressive partial neurological recovery characterized by a restricted ability to stand and walk. Animals injured with a spinal compression force of 1.5 kg showed near normal ambulation 10 days after injury. In fully paralyzed animals (2.5 kg), MRI analysis demonstrated a loss of spinal white matter integrity and extensive septal cavitations. A significant correlation between the magnitude of loss of small and medium-sized myelinated axons in the ventral funiculus and neurological deficits was identified. These data, demonstrating stable neurological deficits in severely injured animals, similarities of spinal pathology to humans, and relatively good post-injury tolerance of this strain of minipigs to spinal trauma, suggest that this model can successfully be used

  1. A rodent model of traumatic stress induces lasting sleep and quantitative electroencephalographic disturbances.

    PubMed

    Nedelcovych, Michael T; Gould, Robert W; Zhan, Xiaoyan; Bubser, Michael; Gong, Xuewen; Grannan, Michael; Thompson, Analisa T; Ivarsson, Magnus; Lindsley, Craig W; Conn, P Jeffrey; Jones, Carrie K

    2015-03-18

    Hyperarousal and sleep disturbances are common, debilitating symptoms of post-traumatic stress disorder (PTSD). PTSD patients also exhibit abnormalities in quantitative electroencephalography (qEEG) power spectra during wake as well as rapid eye movement (REM) and non-REM (NREM) sleep. Selective serotonin reuptake inhibitors (SSRIs), the first-line pharmacological treatment for PTSD, provide modest remediation of the hyperarousal symptoms in PTSD patients, but have little to no effect on the sleep-wake architecture deficits. Development of novel therapeutics for these sleep-wake architecture deficits is limited by a lack of relevant animal models. Thus, the present study investigated whether single prolonged stress (SPS), a rodent model of traumatic stress, induces PTSD-like sleep-wake and qEEG spectral power abnormalities that correlate with changes in central serotonin (5-HT) and neuropeptide Y (NPY) signaling in rats. Rats were implanted with telemetric recording devices to continuously measure EEG before and after SPS treatment. A second cohort of rats was used to measure SPS-induced changes in plasma corticosterone, 5-HT utilization, and NPY expression in brain regions that comprise the neural fear circuitry. SPS caused sustained dysregulation of NREM and REM sleep, accompanied by state-dependent alterations in qEEG power spectra indicative of cortical hyperarousal. These changes corresponded with acute induction of the corticosterone receptor co-chaperone FK506-binding protein 51 and delayed reductions in 5-HT utilization and NPY expression in the amygdala. SPS represents a preclinical model of PTSD-related sleep-wake and qEEG disturbances with underlying alterations in neurotransmitter systems known to modulate both sleep-wake architecture and the neural fear circuitry.

  2. Quantitative Evaluation of Models for Solvent-based, On-column Focusing in Liquid Chromatography

    PubMed Central

    Groskreutz, Stephen R.; Weber, Stephen G.

    2015-01-01

    On-column focusing or preconcentration is a well-known approach to increase concentration sensitivity by generating transient conditions during the injection that result in high solute retention. Preconcentration results from two phenomena: 1) solutes are retained as they enter the column, so their velocities are k′-dependent and lower than the mobile-phase velocity, and 2) zones are compressed by the step gradient that results as the higher-elution-strength mobile phase passes through the solute zones. Several workers have derived the result that the ratio of the eluted zone width (in time) to the injected time width is k2/k1, where k1 is the retention factor of a solute in the sample solvent and k2 is the retention factor in the mobile phase (isocratic). Mills et al. proposed a different factor. To date, neither of the models has been adequately tested. The goal of this work was to evaluate these two models quantitatively. We used n-alkyl esters of p-hydroxybenzoic acid (parabens) as solutes. By making large injections to create obvious volume overload, we could accurately measure the ratio of widths (eluted/injected) over a range of values of k1 and k2. The Mills et al. model does not fit the data. The data are in general agreement with the factor k2/k1, but focusing is about 10% better than the prediction. We attribute the extra focusing to the fact that the second, compression, phenomenon produces a narrower zone than that expected for the passage of a step gradient through the zone. PMID:26210110
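
    As a quick numerical illustration of the factor discussed above, the following Python sketch computes the predicted eluted/injected width ratio k2/k1 (the retention factors and injection width are assumed values, not the paper's data):

      # Predicted zone-width ratio for solvent-based on-column focusing.
      k1 = 40.0                  # retention factor in the weak sample solvent (assumed)
      k2 = 4.0                   # retention factor in the stronger mobile phase (assumed)
      injected_width_s = 30.0    # injected zone width in seconds (assumed)

      ratio = k2 / k1
      print(f"predicted eluted/injected width ratio: {ratio:.2f}")
      print(f"predicted eluted width: {ratio * injected_width_s:.1f} s")
      # The measurements reported above focus about 10% better than this
      # prediction, attributed to step-gradient zone compression.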

  3. Quantitative modeling of virus evolutionary dynamics and adaptation in serial passages using empirically inferred fitness landscapes.

    PubMed

    Woo, Hyung Jun; Reifman, Jaques

    2014-01-01

    We describe a stochastic virus evolution model representing genomic diversification and within-host selection during experimental serial passages under cell culture or live-host conditions. The model incorporates realistic descriptions of the virus genotypes in nucleotide and amino acid sequence spaces, as well as their diversification from error-prone replications. It quantitatively considers factors such as target cell number, bottleneck size, passage period, infection and cell death rates, and the replication rate of different genotypes, allowing for systematic examinations of how their changes affect the evolutionary dynamics of viruses during passages. The relative probability for a viral population to achieve adaptation under a new host environment, quantified by the rate with which a target sequence frequency rises above 50%, was found to be most sensitive to factors related to sequence structure (distance from the wild type to the target) and selection strength (host cell number and bottleneck size). For parameter values representative of RNA viruses, the likelihood of observing adaptations during passages became negligible as the required number of mutations rose above two amino acid sites. We modeled the specific adaptation process of influenza A H5N1 viruses in mammalian hosts by simulating the evolutionary dynamics of H5 strains under the fitness landscape inferred from multiple sequence alignments of H3 proteins. In light of comparisons with experimental findings, we observed that the evolutionary dynamics of adaptation is strongly affected not only by the tendency toward increasing fitness values but also by the accessibility of pathways between genotypes constrained by the genetic code.
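
    The passage dynamics can be caricatured in a few lines of Python. The sketch below is a deliberately simplified two-genotype model (fitness advantage, mutation rate, generations per passage and bottleneck size are all assumed values, not the paper's sequence-space model); it shows how selection during growth and drift at the bottleneck interact, and when the target genotype crosses the 50% frequency used above as the adaptation criterion.

      import numpy as np

      rng = np.random.default_rng(0)
      s, mu = 1.3, 1e-4          # target-genotype fitness advantage, mutation rate (assumed)
      gens_per_passage = 8       # replication rounds within one passage (assumed)
      bottleneck = 200           # virions transferred between passages (assumed)

      f = 0.0                    # frequency of the adapted target genotype
      for passage in range(1, 26):
          for _ in range(gens_per_passage):
              f = s * f / (s * f + (1.0 - f))            # growth with selection
              f += mu * (1.0 - f)                        # mutational influx
          f = rng.binomial(bottleneck, f) / bottleneck   # bottleneck drift
          print(f"passage {passage:2d}: target frequency {f:.3f}")
          if f > 0.5:
              print("adaptation criterion (>50%) reached")
              break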

  4. A quantitative exposure model simulating human norovirus transmission during preparation of deli sandwiches.

    PubMed

    Stals, Ambroos; Jacxsens, Liesbeth; Baert, Leen; Van Coillie, Els; Uyttendaele, Mieke

    2015-03-01

    Human noroviruses (HuNoVs) are a major cause of foodborne gastroenteritis worldwide. They are often transmitted via infected and shedding food handlers manipulating foods such as deli sandwiches. The presented study aimed to simulate HuNoV transmission during the preparation of deli sandwiches in a sandwich bar. A quantitative exposure model was developed by combining the GoldSim® and @Risk® software packages. Input data were collected from scientific literature and from a two-week observational study performed at two sandwich bars. The model included three food handlers working during a three-hour shift on a shared working surface where deli sandwiches are prepared. The model consisted of three components. The first component simulated the preparation of the deli sandwiches and contained the HuNoV reservoirs, locations within the model allowing the accumulation of NoV, and the operation of intervention measures. The second component covered the contamination sources, being (1) the initial HuNoV-contaminated lettuce used on the sandwiches and (2) HuNoV originating from a shedding food handler. The third component included four possible intervention measures to reduce HuNoV transmission: hand and surface disinfection during preparation of the sandwiches, hand gloving, and hand washing after a restroom visit. A single HuNoV-shedding food handler could cause mean levels of 43±18, 81±37 and 18±7 HuNoV particles present on the deli sandwiches, hands and working surfaces, respectively. Introduction of contaminated lettuce as the only source of HuNoV resulted in the presence of 6.4±0.8 and 4.3±0.4 HuNoV on the food and hand reservoirs. The inclusion of hand and surface disinfection and hand gloving as a single intervention measure was not effective in the model, as only marginal reductions of HuNoV levels were noticeable in the different reservoirs. High compliance of hand washing after a restroom visit did reduce HuNoV presence substantially on all reservoirs. The

  5. Effect of platform, reference material, and quantification model on enumeration of Enterococcus by quantitative PCR methods

    EPA Science Inventory

    Quantitative polymerase chain reaction (qPCR) is increasingly being used for the quantitative detection of fecal indicator bacteria in beach water. qPCR allows for same-day health warnings, and its application is being considered as an option for recreational water quality testing...

  6. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... were identified for further literature review to assess their suitability for estimating demand-failure... ``Review of Quantitative Software Reliability Methods,'' BNL- 94047-2010 (ADAMS Accession No. ML102240566), documented a review of currently available quantitative software reliability methods (QSRMs) that can be...

  7. Simulation of water-use conservation scenarios for the Mississippi Delta using an existing regional groundwater flow model

    USGS Publications Warehouse

    Barlow, Jeannie R.B.; Clark, Brian R.

    2011-01-01

    The Mississippi River alluvial plain in northwestern Mississippi (referred to as the Delta), once a floodplain to the Mississippi River covered with hardwoods and marshland, is now a highly productive agricultural region of large economic importance to Mississippi. Water for irrigation is supplied primarily by the Mississippi River Valley alluvial aquifer, and although the alluvial aquifer has a large reserve, there is evidence that the current rate of water use from the alluvial aquifer is not sustainable. Using an existing regional groundwater flow model, conservation scenarios were developed for the alluvial aquifer underlying the Delta region in northwestern Mississippi to assess where the implementation of water-use conservation efforts would have the greatest effect on future water availability: either uniformly throughout the Delta, or focused on a cone of depression in the alluvial aquifer underlying the central part of the Delta. Five scenarios were simulated with the Mississippi Embayment Regional Aquifer Study groundwater flow model: (1) a base scenario in which water use remained constant at 2007 rates throughout the entire simulation; (2) a 5-percent 'Delta-wide' conservation scenario in which water use across the Delta was decreased by 5 percent; (3) a 5-percent 'cone-equivalent' conservation scenario in which water use within the area of the cone of depression was decreased by 11 percent (a volume equivalent to the 5-percent Delta-wide conservation scenario); (4) a 25-percent Delta-wide conservation scenario in which water use across the Delta was decreased by 25 percent; and (5) a 25-percent cone-equivalent conservation scenario in which water use within the area of the cone of depression was decreased by 55 percent (a volume equivalent to the 25-percent Delta-wide conservation scenario). The Delta-wide scenarios result in greater average water-level improvements (relative to the base scenario) for the entire Delta area than the cone
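
    The 'cone-equivalent' percentages above follow from simple volume bookkeeping; a small Python check using only the numbers quoted in the abstract recovers the implied share of total Delta pumping that occurs within the cone of depression:

      # Volume-equivalent conservation scenarios from the abstract.
      for delta_wide, in_cone in [(0.05, 0.11), (0.25, 0.55)]:
          share = delta_wide / in_cone   # fraction of Delta-wide pumping inside the cone
          print(f"{delta_wide:.0%} Delta-wide == {in_cone:.0%} in-cone "
                f"=> the cone area pumps ~{share:.0%} of the Delta total")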

  8. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    PubMed

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates. The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical
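
    A minimal Python sketch of the multinomial multiple-strain idea (strain prevalences, toxigenicity flags and event size are illustrative assumptions, not the study's data): the strain composition of each contamination event is a multinomial draw, and only enterotoxin-A-producing strains contribute to exposure.

      import numpy as np

      rng = np.random.default_rng(1)
      prevalence = np.array([0.5, 0.3, 0.2])        # assumed strain prevalences
      produces_sea = np.array([True, False, True])  # assumed SEA-producing strains

      events = rng.multinomial(1_000, prevalence, size=10_000)  # cells per event
      toxigenic = events[:, produces_sea].sum(axis=1)
      print(f"mean toxigenic fraction per event: {toxigenic.mean() / 1_000:.2f}")
      print(f"95th percentile: {np.percentile(toxigenic, 95) / 1_000:.2f}")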

  9. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  10. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  11. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  12. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  13. 40 CFR Table 3 to Subpart Bbbb of... - Model Rule-Class I Nitrogen Oxides Emission Limits for Existing Small Municipal Waste Combustion...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Model Rule-Class I Nitrogen Oxides... 3 to Subpart BBBB of Part 60—Model Rule—Class I Nitrogen Oxides Emission Limits for Existing Small... greater than 250 tons per day of municipal solid waste. See § 60.1940 for definitions. b Nitrogen...

  14. Model-independent quantitative measurement of nanomechanical oscillator vibrations using electron-microscope linescans

    SciTech Connect

    Wang, Huan; Fenton, J. C.; Chiatti, O.; Warburton, P. A.

    2013-07-15

    Nanoscale mechanical resonators are highly sensitive devices and, therefore, for application as highly sensitive mass balances, they are potentially superior to micromachined cantilevers. The absolute measurement of nanoscale displacements of such resonators remains a challenge, however, since the optical signal reflected from a cantilever whose dimensions are sub-wavelength is at best very weak. We describe a technique for quantitative analysis and fitting of scanning-electron microscope (SEM) linescans across a cantilever resonator, involving deconvolution from the vibrating resonator profile using the stationary resonator profile. This enables determination of the absolute amplitude of nanomechanical cantilever oscillations even when the oscillation amplitude is much smaller than the cantilever width. This technique is independent of any model of secondary-electron emission from the resonator and is, therefore, applicable to resonators with arbitrary geometry and material inhomogeneity. We demonstrate the technique using focussed-ion-beam–deposited tungsten cantilevers of radius ∼60–170 nm inside a field-emission SEM, with excitation of the cantilever by a piezoelectric actuator allowing measurement of the full frequency response. Oscillation amplitudes approaching the size of the primary electron-beam can be resolved. We further show that the optimum electron-beam scan speed is determined by a compromise between deflection of the cantilever at low scan speeds and limited spatial resolution at high scan speeds. Our technique will be an important tool for use in precise characterization of nanomechanical resonator devices.
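
    The deconvolution idea can be prototyped compactly: for harmonic motion the cantilever's time-averaged position density is the arcsine distribution, so the vibrating linescan is the stationary profile convolved with that kernel. The Python sketch below uses a synthetic Gaussian stationary profile and an assumed amplitude (the real method works on measured SEM linescans) and recovers the amplitude by a one-parameter fit.

      import numpy as np

      x = np.linspace(-500.0, 500.0, 2001)          # lateral position, nm
      dx = x[1] - x[0]
      stationary = np.exp(-x**2 / (2 * 80.0**2))    # stationary linescan (synthetic)

      def arcsine_kernel(A):
          """Position density of a harmonic oscillator with amplitude A (nm)."""
          k = np.zeros_like(x)
          inside = np.abs(x) < A
          k[inside] = 1.0 / (np.pi * np.sqrt(A**2 - x[inside]**2))
          return k / (k.sum() * dx)

      def vibrating_profile(A):
          return np.convolve(stationary, arcsine_kernel(A), mode="same") * dx

      observed = vibrating_profile(150.0)           # "measured" scan, true A = 150 nm
      trials = np.arange(50.0, 300.0, 2.0)
      best = min(trials, key=lambda A: np.sum((vibrating_profile(A) - observed)**2))
      print(f"recovered amplitude: {best:.0f} nm (true 150 nm)")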

  15. Quantitative studies of animal colour constancy: using the chicken as model.

    PubMed

    Olsson, Peter; Wilby, David; Kelber, Almut

    2016-05-11

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations. PMID:27170714
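
    For reference, the receptor-noise-limited distance used above has a simple closed form; below is a minimal Python sketch for the trichromatic case of Vorobyev and Osorio's model, with made-up quantum catches and noise levels (the chicken itself is a tetrachromat, for which the model extends analogously).

      import numpy as np

      def rnl_distance(f1, f2, e):
          """RNL colour distance (in JND units) for a trichromat.

          f1, f2 : log quantum catches of the three cone types for two stimuli
          e      : noise (Weber fraction) of each receptor channel
          """
          d = np.asarray(f1, float) - np.asarray(f2, float)
          num = (e[0] * (d[1] - d[2]))**2 + (e[1] * (d[0] - d[2]))**2 + (e[2] * (d[0] - d[1]))**2
          den = (e[0] * e[1])**2 + (e[0] * e[2])**2 + (e[1] * e[2])**2
          return np.sqrt(num / den)

      # Hypothetical example: one object's catches under training vs test light.
      dS = rnl_distance([0.00, 0.10, 0.30], [0.04, 0.11, 0.22], e=[0.1, 0.07, 0.05])
      print(f"chromatic distance: {dS:.2f} JND")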

  16. The yeast galactose network as a quantitative model for cellular memory

    PubMed Central

    Stockwell, Sarah R.; Landry, Christian R.; Rifkin, Scott A.

    2014-01-01

    Recent experiments have revealed surprising behavior in the yeast galactose (GAL) pathway, one of the preeminent systems for studying gene regulation. Under certain circumstances, yeast cells display memory of their prior nutrient environments. We distinguish two kinds of cellular memory discovered by quantitative investigations of the GAL network and present a conceptual framework for interpreting new experiments and current ideas on GAL memory. Reinduction memory occurs when cells respond transcriptionally to one environment, shut down the response during several generations in a second environment, then respond faster and with less cell-to-cell variation when returned to the first environment. Persistent memory describes a long-term, arguably stable response in which cells adopt a bimodal or unimodal distribution of induction levels depending on their preceding environment. Deep knowledge of how the yeast GAL pathway responds to different sugar environments has enabled rapid progress in uncovering the mechanisms behind GAL memory, which include cytoplasmic inheritance of inducer proteins and positive feedback loops among regulatory genes. This network of genes, long used to study gene regulation, is now emerging as a model system for cellular memory. PMID:25328105

  17. Temperature-dependent turnovers in sex-determination mechanisms: a quantitative model.

    PubMed

    Grossen, Christine; Neuenschwander, Samuel; Perrin, Nicolas

    2011-01-01

    Sex determination is often seen as a dichotomous process: individual sex is assumed to be determined either by genetic (genotypic sex determination, GSD) or by environmental factors (environmental sex determination, ESD), most often temperature (temperature sex determination, TSD). We endorse an alternative view, which sees GSD and TSD as the ends of a continuum. Both effects interact a priori, because temperature can affect gene expression at any step along the sex-determination cascade. We propose to define sex-determination systems at the population- (rather than individual) level, via the proportion of variance in phenotypic sex stemming from genetic versus environmental factors, and we formalize this concept in a quantitative-genetics framework. Sex is seen as a threshold trait underlain by a liability factor, and reaction norms allow modeling interactions between genotypic and temperature effects (seen as the necessary consequences of thermodynamic constraints on the underlying physiological processes). As this formalization shows, temperature changes (due to e.g., climatic changes or range expansions) are expected to provoke turnovers in sex-determination mechanisms, by inducing large-scale sex reversal and thereby sex-ratio selection for alternative sex-determining genes. The frequency of turnovers and prevalence of homomorphic sex chromosomes in cold-blooded vertebrates might thus directly relate to the temperature dependence in sex-determination mechanisms.
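
    A minimal Python sketch of this threshold-liability formalization (all parameter values assumed): phenotypic sex follows from whether a liability, the sum of a genotypic value, a linear thermal reaction norm and noise, exceeds a threshold, so the population sex ratio shifts smoothly with temperature.

      import numpy as np

      rng = np.random.default_rng(2)

      def proportion_male(temp, n=100_000, beta=0.8, t_ref=25.0, sd_g=1.0, sd_e=0.5):
          g = rng.normal(0.0, sd_g, n)               # genotypic liability values
          e = rng.normal(0.0, sd_e, n)               # micro-environmental noise
          liability = g + beta * (temp - t_ref) + e  # linear reaction norm in T
          return (liability > 0.0).mean()            # threshold trait: male if > 0

      for t in (22, 24, 25, 26, 28):
          print(f"T = {t} deg C: proportion male = {proportion_male(t):.2f}")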

  18. Modeling development and quantitative trait mapping reveal independent genetic modules for leaf size and shape.

    PubMed

    Baker, Robert L; Leong, Wen Fung; Brock, Marcus T; Markelz, R J Cody; Covington, Michael F; Devisetty, Upendra K; Edwards, Christine E; Maloof, Julin; Welch, Stephen; Weinig, Cynthia

    2015-10-01

    Improved predictions of fitness and yield may be obtained by characterizing the genetic controls and environmental dependencies of organismal ontogeny. Elucidating the shape of growth curves may reveal novel genetic controls that single-time-point (STP) analyses do not because, in theory, infinite numbers of growth curves can result in the same final measurement. We measured leaf lengths and widths in Brassica rapa recombinant inbred lines (RILs) throughout ontogeny. We modeled leaf growth and allometry as function valued traits (FVT), and examined genetic correlations between these traits and aspects of phenology, physiology, circadian rhythms and fitness. We used RNA-seq to construct a SNP linkage map and mapped trait quantitative trait loci (QTL). We found genetic trade-offs between leaf size and growth rate FVT and uncovered differences in genotypic and QTL correlations involving FVT vs STPs. We identified leaf shape (allometry) as a genetic module independent of length and width and identified selection on FVT parameters of development. Leaf shape is associated with venation features that affect desiccation resistance. The genetic independence of leaf shape from other leaf traits may therefore enable crop optimization in leaf shape without negative effects on traits such as size, growth rate, duration or gas exchange.
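
    In its simplest form, the function-valued-trait idea amounts to fitting a parametric growth curve per genotype and mapping QTL on the fitted parameters rather than on single-time-point measurements. A Python sketch with synthetic leaf-length data (a logistic curve and all values are assumptions; the study's FVT machinery is richer):

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, K, r, t0):
          """Logistic growth: asymptote K, rate r, inflection time t0."""
          return K / (1.0 + np.exp(-r * (t - t0)))

      days = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
      leaf_mm = np.array([4.0, 12.0, 30.0, 52.0, 63.0, 66.0])  # synthetic data

      (K, r, t0), _ = curve_fit(logistic, days, leaf_mm, p0=[70.0, 0.3, 15.0])
      print(f"K = {K:.1f} mm, r = {r:.2f}/day, t0 = {t0:.1f} days")
      # K, r and t0 would then serve as the traits passed to QTL mapping,
      # in place of a single-time-point (STP) measurement.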

  19. Toxicity mechanisms of the food contaminant citrinin: application of a quantitative yeast model.

    PubMed

    Pascual-Ahuir, Amparo; Vanacloig-Pedros, Elena; Proft, Markus

    2014-05-01

    Mycotoxins are important food contaminants and a serious threat for human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose dependent activation of stress responsive promoters such as GRE2 or SOD2. More specifically, oxidative stress responsive pathways via the transcription factors Yap1 and Skn7 are critically implicated in the response to citrinin. Additionally, genes in various multidrug resistance transport systems are functionally involved in the resistance to citrinin. Our study identifies the antioxidant defense as a major physiological response in the case of citrinin. In general, our results show that the use of live cell gene expression reporters in yeast is a powerful tool to identify toxicity targets and detoxification mechanisms of a broad range of food contaminants relevant for human nutrition. PMID:24858409

  20. A quantitative model of normal C. elegans embryogenesis and its disruption after stress

    PubMed Central

    Richards, Julia L.; Zacharias, Amanda L.; Walton, Travis; Burdick, Joshua T.; Murray, John Isaac

    2012-01-01

    The invariant lineage of Caenorhabditis elegans has powerful potential for quantifying developmental variability in normal and stressed embryos. Previous studies of division timing by automated lineage tracing suggested that variability in cell cycle timing is low in younger embryos, but manual lineage tracing of specific lineages suggested that variability may increase for later divisions. We developed improved automated lineage tracing methods that allow routine lineage tracing through the last round of embryonic cell divisions and we applied these methods to trace the lineage of 18 wild-type embryos. Cell cycle lengths, division axes and cell positions are remarkably consistent among these embryos at all stages, with only slight increases in variability later in development. The resulting quantitative 4-dimensional model of embryogenesis provides a powerful reference dataset to identify defects in mutants or in embryos that have experienced environmental perturbations. We also traced the lineages of embryos imaged at higher temperatures to quantify the decay in developmental robustness under temperature stress. Developmental variability increases modestly at 25°C compared with 22°C and dramatically at 26°C, and we identify homeotic transformations in a subset of embryos grown at 26°C. The deep lineage tracing methods provide a powerful tool for analysis of normal development, gene expression and mutants and we provide a graphical user interface to allow other researchers to explore the average behavior of arbitrary cells in a reference embryo. PMID:23220655

  1. Toxicity mechanisms of the food contaminant citrinin: application of a quantitative yeast model.

    PubMed

    Pascual-Ahuir, Amparo; Vanacloig-Pedros, Elena; Proft, Markus

    2014-05-22

    Mycotoxins are important food contaminants and a serious threat for human nutrition. However, in many cases the mechanisms of toxicity for this diverse group of metabolites are poorly understood. Here we apply live cell gene expression reporters in yeast as a quantitative model to unravel the cellular defense mechanisms in response to the mycotoxin citrinin. We find that citrinin triggers a fast and dose dependent activation of stress responsive promoters such as GRE2 or SOD2. More specifically, oxidative stress responsive pathways via the transcription factors Yap1 and Skn7 are critically implicated in the response to citrinin. Additionally, genes in various multidrug resistance transport systems are functionally involved in the resistance to citrinin. Our study identifies the antioxidant defense as a major physiological response in the case of citrinin. In general, our results show that the use of live cell gene expression reporters in yeast is a powerful tool to identify toxicity targets and detoxification mechanisms of a broad range of food contaminants relevant for human nutrition.

  2. Quantitative constraint-based computational model of tumor-to-stroma coupling via lactate shuttle.

    PubMed

    Capuani, Fabrizio; De Martino, Daniele; Marinari, Enzo; De Martino, Andrea

    2015-07-07

    Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell's surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell's demand for ATP, amino acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment.
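
    A toy constraint-based sketch in Python of the coupling described above, with three lumped reactions and illustrative stoichiometries and bounds (the paper uses a genome-scale human metabolic model): maximizing total ATP production under an oxygen-limited tumor cell drives glucose fermentation in the tumor and lactate oxidation in the stroma.

      import numpy as np
      from scipy.optimize import linprog

      # Fluxes (all >= 0): x0 = tumor glucose fermentation (~2 ATP + 2 lactate),
      # x1 = tumor glucose oxidation (~30 ATP, 6 O2), x2 = stroma lactate
      # oxidation (~14 ATP, 3 O2). Stoichiometries are rounded textbook values.
      c = -np.array([2.0, 30.0, 14.0])          # maximize total ATP output

      A_ub = np.array([[ 1.0, 1.0, 0.0],        # shared glucose supply
                       [ 0.0, 6.0, 0.0],        # tumor O2 cap (hypoxia)
                       [ 0.0, 0.0, 3.0],        # stroma O2 cap
                       [-2.0, 0.0, 1.0]])       # stroma lactate <= tumor lactate
      b_ub = np.array([10.0, 12.0, 60.0, 0.0])  # assumed supply limits

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, None)] * 3)
      x0, x1, x2 = res.x
      print(f"fermentation {x0:.1f}, tumor oxidation {x1:.1f}, "
            f"stroma lactate oxidation {x2:.1f}, total ATP {-res.fun:.0f}")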

  3. Quantitative constraint-based computational model of tumor-to-stroma coupling via lactate shuttle

    PubMed Central

    Capuani, Fabrizio; De Martino, Daniele; Marinari, Enzo; De Martino, Andrea

    2015-01-01

    Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell’s surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell’s demand for ATP, amino acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment. PMID:26149467

  4. Assessing the toxic effects of ethylene glycol ethers using Quantitative Structure Toxicity Relationship models

    SciTech Connect

    Ruiz, Patricia; Mumtaz, Moiz; Gombar, Vijay

    2011-07-15

    Experimental determination of toxicity profiles consumes a great deal of time, money, and other resources. Consequently, businesses, societies, and regulators strive for reliable alternatives such as Quantitative Structure Toxicity Relationship (QSTR) models to fill gaps in toxicity profiles of compounds of concern to human health. The use of glycol ethers and their health effects have recently attracted the attention of international organizations such as the World Health Organization (WHO). The board members of Concise International Chemical Assessment Documents (CICAD) recently identified inadequate testing as well as gaps in toxicity profiles of ethylene glycol mono-n-alkyl ethers (EGEs). The CICAD board requested the ATSDR Computational Toxicology and Methods Development Laboratory to conduct QSTR assessments of certain specific toxicity endpoints for these chemicals. In order to evaluate the potential health effects of EGEs, CICAD proposed a critical QSTR analysis of the mutagenicity, carcinogenicity, and developmental effects of EGEs and other selected chemicals. We report here results of the application of QSTRs to assess rodent carcinogenicity, mutagenicity, and developmental toxicity of four EGEs: 2-methoxyethanol, 2-ethoxyethanol, 2-propoxyethanol, and 2-butoxyethanol and their metabolites. Neither mutagenicity nor carcinogenicity is indicated for the parent compounds, but these compounds are predicted to be developmental toxicants. The predicted toxicity effects were subjected to reverse QSTR (rQSTR) analysis to identify structural attributes that may be the main drivers of the developmental toxicity potential of these compounds.

  5. Quantitative studies of animal colour constancy: using the chicken as model.

    PubMed

    Olsson, Peter; Wilby, David; Kelber, Almut

    2016-05-11

    Colour constancy is the capacity of visual systems to keep colour perception constant despite changes in the illumination spectrum. Colour constancy has been tested extensively in humans and has also been described in many animals. In humans, colour constancy is often studied quantitatively, but besides humans, this has only been done for the goldfish and the honeybee. In this study, we quantified colour constancy in the chicken by training the birds in a colour discrimination task and testing them in changed illumination spectra to find the largest illumination change in which they were able to remain colour-constant. We used the receptor noise limited model for animal colour vision to quantify the illumination changes, and found that colour constancy performance depended on the difference between the colours used in the discrimination task, the training procedure and the time the chickens were allowed to adapt to a new illumination before making a choice. We analysed literature data on goldfish and honeybee colour constancy with the same method and found that chickens can compensate for larger illumination changes than both. We suggest that future studies on colour constancy in non-human animals could use a similar approach to allow for comparison between species and populations.

  6. Quantitative constraint-based computational model of tumor-to-stroma coupling via lactate shuttle.

    PubMed

    Capuani, Fabrizio; De Martino, Daniele; Marinari, Enzo; De Martino, Andrea

    2015-01-01

    Cancer cells utilize large amounts of ATP to sustain growth, relying primarily on non-oxidative, fermentative pathways for its production. In many types of cancers this leads, even in the presence of oxygen, to the secretion of carbon equivalents (usually in the form of lactate) in the cell's surroundings, a feature known as the Warburg effect. While the molecular basis of this phenomenon is still to be elucidated, it is clear that the spilling of energy resources contributes to creating a peculiar microenvironment for tumors, possibly characterized by a degree of toxicity. This suggests that mechanisms for recycling the fermentation products (e.g. a lactate shuttle) may be active, effectively inducing a mutually beneficial metabolic coupling between aberrant and non-aberrant cells. Here we analyze this scenario through a large-scale in silico metabolic model of interacting human cells. By going beyond the cell-autonomous description, we show that elementary physico-chemical constraints indeed favor the establishment of such a coupling under very broad conditions. The characterization we obtained by tuning the aberrant cell's demand for ATP, amino acids and fatty acids and/or the imbalance in nutrient partitioning provides quantitative support to the idea that synergistic multi-cell effects play a central role in cancer sustainment. PMID:26149467

  7. Quantitative trait locus analysis of symbiotic nitrogen fixation activity in the model legume Lotus japonicus.

    PubMed

    Tominaga, Akiyoshi; Gondo, Takahiro; Akashi, Ryo; Zheng, Shao-Hui; Arima, Susumu; Suzuki, Akihiro

    2012-05-01

    Many legumes form nitrogen-fixing root nodules. An elevation of nitrogen fixation in such legumes would have significant implications for plant growth and biomass production in agriculture. To identify the genetic basis for the regulation of nitrogen fixation, quantitative trait locus (QTL) analysis was conducted with recombinant inbred lines derived from the cross Miyakojima MG-20 × Gifu B-129 in the model legume Lotus japonicus. This population was inoculated with Mesorhizobium loti MAFF303099 and grown for 14 days in pots containing vermiculite. Phenotypic data were collected for acetylene reduction activity (ARA) per plant (ARA/P), ARA per nodule weight (ARA/NW), ARA per nodule number (ARA/NN), NN per plant, NW per plant, stem length (SL), SL without inoculation (SLbac-), shoot dry weight without inoculation (SWbac-), root length without inoculation (RLbac-), and root dry weight (RWbac-), and 34 QTLs were ultimately identified. ARA/P, ARA/NN, NW, and SL showed strong correlations and QTL co-localization, suggesting that several plant characteristics important for symbiotic nitrogen fixation are controlled by the same locus. QTLs for ARA/P, ARA/NN, NW, and SL, co-localized around marker TM0832 on chromosome 4, were also co-localized with previously reported QTLs for seed mass. This is the first report of QTL analysis for symbiotic nitrogen fixation activity traits.

  8. Quantitative trait locus analysis of multiple agronomic traits in the model legume Lotus japonicus.

    PubMed

    Gondo, Takahiro; Sato, Shusei; Okumura, Kenji; Tabata, Satoshi; Akashi, Ryo; Isobe, Sachiko

    2007-07-01

    The first quantitative trait locus (QTL) analysis of multiple agronomic traits in the model legume Lotus japonicus was performed with a population of recombinant inbred lines derived from Miyakojima MG-20 × Gifu B-129. Thirteen agronomic traits were evaluated in 2004 and 2005: traits of vegetative parts (plant height, stem thickness, leaf length, leaf width, plant regrowth, plant shape, and stem color), flowering traits (flowering time and degree), and pod and seed traits (pod length, pod width, seeds per pod, and seed mass). A total of 40 QTLs were detected that explained 5%-69% of total variation. The QTL that explained the most variation was that for stem color, which was detected in the same region of chromosome 2 in both years. Some QTLs were colocated, especially those for pod and seed traits. Seed mass QTLs were located at 5 locations that mapped to the corresponding genomic positions of equivalent QTLs in soybean, pea, chickpea, and mung bean. This study provides fundamental information for breeding of agronomically important legume crops.

  9. Quantitation and pharmacokinetic modeling of therapeutic antibody quality attributes in human studies

    PubMed Central

    Li, Yinyin; Monine, Michael; Huang, Yu; Swann, Patrick; Nestorov, Ivan; Lyubarskaya, Yelena

    2016-01-01

    A thorough understanding of drug metabolism and disposition can aid in the assessment of efficacy and safety. However, analytical methods used in pharmacokinetics (PK) studies of protein therapeutics are usually based on ELISA, and therefore can provide only a limited perspective on the quality of the drug in concentration measurements. Individual post-translational modifications (PTMs) of protein therapeutics are rarely considered for PK analysis, partly because it is technically difficult to recover and quantify individual protein variants from biological fluids. Meanwhile, PTMs may be directly linked to variations in drug efficacy and safety, and therefore understanding of clearance and metabolism of biopharmaceutical protein variants during clinical studies is an important consideration. To address such challenges, we developed an affinity-purification procedure followed by peptide mapping with mass spectrometric detection, which can profile multiple quality attributes of therapeutic antibodies recovered from patient sera. The obtained data enable quantitative modeling, which allows for simulation of the PK of different individual PTMs or attribute levels in vivo and thus facilitates the assessment of the in vivo impact of quality attributes. Such information can contribute to the product quality attribute risk assessment during manufacturing process development and inform an appropriate process control strategy. PMID:27216574
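
    The attribute-level PK simulation described above can be sketched as a two-exponential toy model in Python (clearance rates and the initial PTM level are hypothetical): if molecules carrying a given PTM clear faster than the bulk antibody, the attribute's measured fraction decays over time after dosing.

      import numpy as np

      t = np.linspace(0.0, 28.0, 8)        # days post-dose
      cl_bulk, cl_ptm = 0.10, 0.18         # 1/day clearance rates (assumed)
      f0 = 0.05                            # PTM level in the dosed material (assumed)

      c_ptm = f0 * np.exp(-cl_ptm * t)             # variant concentration
      c_bulk = (1.0 - f0) * np.exp(-cl_bulk * t)   # unmodified concentration
      for day, frac in zip(t, c_ptm / (c_ptm + c_bulk)):
          print(f"day {day:4.1f}: PTM fraction = {frac:.3f}")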

  10. Quantitative Profiling of Brain Lipid Raft Proteome in a Mouse Model of Fragile X Syndrome

    PubMed Central

    Kalinowska, Magdalena; Castillo, Catherine; Francesconi, Anna

    2015-01-01

    Fragile X Syndrome, a leading cause of inherited intellectual disability and autism, arises from transcriptional silencing of the FMR1 gene encoding an RNA-binding protein, Fragile X Mental Retardation Protein (FMRP). FMRP can regulate the expression of approximately 4% of brain transcripts through its role in regulation of mRNA transport, stability and translation, thus providing a molecular rationale for its potential pleiotropic effects on neuronal and brain circuitry function. Several intracellular signaling pathways are dysregulated in the absence of FMRP suggesting that cellular deficits may be broad and could result in homeostatic changes. Lipid rafts are specialized regions of the plasma membrane, enriched in cholesterol and glycosphingolipids, involved in regulation of intracellular signaling. Among transcripts targeted by FMRP, a subset encodes proteins involved in lipid biosynthesis and homeostasis, dysregulation of which could affect the integrity and function of lipid rafts. Using a quantitative mass spectrometry-based approach we analyzed the lipid raft proteome of Fmr1 knockout mice, an animal model of Fragile X syndrome, and identified candidate proteins that are differentially represented in Fmr1 knockout mice lipid rafts. Furthermore, network analysis of these candidate proteins reveals connectivity between them and predicts functional connectivity with genes encoding components of myelin sheath, axonal processes and growth cones. Our findings provide insight to aid identification of molecular and cellular dysfunctions arising from Fmr1 silencing and for uncovering shared pathologies between Fragile X syndrome and other autism spectrum disorders. PMID:25849048

  11. Modeling development and quantitative trait mapping reveal independent genetic modules for leaf size and shape.

    PubMed

    Baker, Robert L; Leong, Wen Fung; Brock, Marcus T; Markelz, R J Cody; Covington, Michael F; Devisetty, Upendra K; Edwards, Christine E; Maloof, Julin; Welch, Stephen; Weinig, Cynthia

    2015-10-01

    Improved predictions of fitness and yield may be obtained by characterizing the genetic controls and environmental dependencies of organismal ontogeny. Elucidating the shape of growth curves may reveal novel genetic controls that single-time-point (STP) analyses do not because, in theory, infinite numbers of growth curves can result in the same final measurement. We measured leaf lengths and widths in Brassica rapa recombinant inbred lines (RILs) throughout ontogeny. We modeled leaf growth and allometry as function valued traits (FVT), and examined genetic correlations between these traits and aspects of phenology, physiology, circadian rhythms and fitness. We used RNA-seq to construct a SNP linkage map and mapped trait quantitative trait loci (QTL). We found genetic trade-offs between leaf size and growth rate FVT and uncovered differences in genotypic and QTL correlations involving FVT vs STPs. We identified leaf shape (allometry) as a genetic module independent of length and width and identified selection on FVT parameters of development. Leaf shape is associated with venation features that affect desiccation resistance. The genetic independence of leaf shape from other leaf traits may therefore enable crop optimization in leaf shape without negative effects on traits such as size, growth rate, duration or gas exchange. PMID:26083847

  12. A quantitative microbiological exposure assessment model for Bacillus cereus in REPFEDs.

    PubMed

    Daelman, Jeff; Membré, Jeanne-Marie; Jacxsens, Liesbeth; Vermeulen, An; Devlieghere, Frank; Uyttendaele, Mieke

    2013-09-16

    One of the pathogens of concern in refrigerated and processed foods of extended durability (REPFED) is psychrotrophic Bacillus cereus, because of its ability to survive pasteurisation and grow at low temperatures. In this study a quantitative microbiological exposure assessment (QMEA) of psychrotrophic B. cereus in REPFEDs is presented. The goal is to quantify (i) the prevalence and concentration of B. cereus during production and shelf life, (ii) the number of packages with potential emetic toxin formation and (iii) the impact of different processing steps and consumer behaviour on the exposure to B. cereus from REPFEDs. The QMEA comprises the entire production and distribution process, from raw materials over pasteurisation and up to the moment it is consumed or discarded. To model this process the modular process risk model (MPRM) was used (Nauta, 2002). The product life was divided into nine modules, each module corresponding to a basic process: (1) raw material contamination, (2) cross contamination during handling, (3) inactivation during preparation, (4) growth during intermediate storage, (5) partitioning of batches in portions, (6) mixing portions to create the product, (7) recontamination during assembly and packaging, (8) inactivation during pasteurisation and (9) growth during shelf life. Each of the modules was modelled and built using a combination of newly gathered and literature data, predictive models and expert opinions. Units (batch/portion/package) with a B. cereus concentration of 10⁵ CFU/g or more were considered 'risky' units. Results show that the main drivers of variability and uncertainty are consumer behaviour, strain variability and modelling error. The prevalence of B. cereus in the final products is estimated at 48.6% (±0.01%) and the number of packs with too high B. cereus counts at the moment of consumption is estimated at 4750 packs per million (0.48%). Cold storage at retail and consumer level is vital in limiting the exposure
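
    The last modules of such an exposure model are easy to caricature in Python. The sketch below (all distributions and the secondary growth model are assumptions, not the study's fitted inputs) propagates post-pasteurisation counts through chilled storage and reports the fraction of packages exceeding the 10⁵ CFU/g level of concern:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000                            # simulated packages
      log_n0 = rng.normal(-1.0, 1.0, n)      # log10 CFU/g after pasteurisation (assumed)
      days = rng.uniform(10.0, 35.0, n)      # storage time before consumption (assumed)
      temp = rng.normal(6.0, 2.0, n)         # chill-chain temperature, deg C (assumed)

      growth = np.maximum(0.0, 0.04 * (temp - 4.0))        # log10/day toy model
      log_n_end = np.minimum(log_n0 + growth * days, 8.0)  # stationary-phase cap
      print(f"packages over 1e5 CFU/g at consumption: {(log_n_end >= 5.0).mean():.2%}")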

  13. Qualitative and quantitative structure-activity relationship modelling for predicting blood-brain barrier permeability of structurally diverse chemicals.

    PubMed

    Gupta, S; Basant, N; Singh, K P

    2015-01-01

    In this study, structure-activity relationship (SAR) models have been established for qualitative and quantitative prediction of the blood-brain barrier (BBB) permeability of chemicals. The structural diversity of the chemicals and nonlinear structure in the data were tested. The predictive and generalization ability of the developed SAR models were tested through internal and external validation procedures. In complete data, the QSAR models rendered ternary classification accuracy of >98.15%, while the quantitative SAR models yielded correlation (r²) of >0.926 between the measured and the predicted BBB permeability values with the mean squared error (MSE) <0.045. The proposed models were also applied to an external new in vitro data and yielded classification accuracy of >82.7% and r² > 0.905 (MSE < 0.019). The sensitivity analysis revealed that topological polar surface area (TPSA) has the highest effect in qualitative and quantitative models for predicting the BBB permeability of chemicals. Moreover, these models showed predictive performance superior to those reported earlier in the literature. This demonstrates the appropriateness of the developed SAR models to reliably predict the BBB permeability of new chemicals, which can be used for initial screening of the molecules in the drug development process.
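
    As an illustration of the qualitative (classification) task, here is a Python sketch on synthetic data in which the class label is driven mainly by TPSA (the descriptor highlighted by the sensitivity analysis above); the data, labels and model choice are all assumptions, not the study's.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      n = 600
      tpsa = rng.uniform(0.0, 150.0, n)      # topological polar surface area
      logp = rng.normal(2.0, 1.5, n)         # lipophilicity (uninformative here)
      mw = rng.normal(350.0, 80.0, n)        # molecular weight (uninformative here)
      X = np.column_stack([tpsa, logp, mw])

      # Toy label rule: BBB permeability class drops as TPSA rises (plus noise).
      y = np.digitize(-tpsa + 10.0 * rng.normal(size=n), bins=[-100.0, -50.0])

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
      print(f"ternary test accuracy: {clf.score(X_te, y_te):.2f}")
      print("importances (TPSA, logP, MW):", clf.feature_importances_.round(2))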

  14. Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models

    NASA Astrophysics Data System (ADS)

    Ramli, Huda Mohd.; Esler, J. Gavin

    2016-07-01

    A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10⁶) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
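
    To make the object of study concrete, here is a minimal Python sketch (all parameters assumed) of the simplest LPDM integrator, explicit Euler-Maruyama for a one-dimensional Ornstein-Uhlenbeck velocity model in stationary homogeneous turbulence, checked against Taylor's exact dispersion result; the paper's methodology benchmarks schemes of this kind against accurate Fokker-Planck solutions in inhomogeneous settings.

      import numpy as np

      rng = np.random.default_rng(5)
      tau, sigma_u = 100.0, 1.0      # Lagrangian timescale (s), velocity scale (m/s)
      dt, n_steps, n_part = 10.0, 1000, 10_000

      u = rng.normal(0.0, sigma_u, n_part)   # start from the stationary velocity PDF
      x = np.zeros(n_part)
      for _ in range(n_steps):
          u += -(u / tau) * dt + np.sqrt(2.0 * sigma_u**2 * dt / tau) * rng.normal(size=n_part)
          x += u * dt

      # Taylor's result for stationary homogeneous turbulence:
      t = n_steps * dt
      theory = 2.0 * sigma_u**2 * tau * (t - tau * (1.0 - np.exp(-t / tau)))
      print(f"simulated <x^2> = {x.var():.3e} m^2, theory = {theory:.3e} m^2")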

  15. Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions

    PubMed Central

    Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu

    2014-01-01

    Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine, to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and personalized regenerative medicine. Using time-lapse imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics. PMID:24769917

  16. Quantitative modeling of reflected ultrasonic bounded beams and a new estimate of the Schoch shift.

    PubMed

    Bouzidi, Youcef; Schmitt, Douglas R

    2008-12-01

    The wavefields of bounded acoustic beams and pulses reflected from water-loaded plates are fully modeled with the phase advance technique. The wavefield produced at the source is propagated at any incidence angle using phase shift modeling that incorporates the full analytic solution for the acoustic reflectivity at the interface. This approach provides for the ready visualization of both the stationary monofrequency beam wavefield and animation of the temporally bounded pulse. The model images are reminiscent of the classic Schlieren photographs that first illustrated the nonspecular behavior of the reflected beams incident near critical angles. Various phenomena such as the lateral displacement and the null zone at the Rayleigh critical angle are recreated. A new approximation for this shift agrees well with that of the peak energy of the reflected beam. Similar effects are observed during the reflection of a bounded pulse. Although more computationally costly than existing analytic approximations, the phase advance technique can facilitate the interpretation of reflectivity measurements obtained in laboratory experiments. In particular, the full visualization allows for a better understanding of the behavior of reflected waves at any angle of incidence. PMID:19126490
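
    A minimal 1D angular-spectrum sketch of the phase advance idea: the bounded beam is decomposed into plane waves, each advanced in phase and, at the interface, scaled by a plane-wave reflection coefficient. The reflectivity below is a unity placeholder where the paper inserts the full analytic fluid-solid solution:

```python
import numpy as np

f = 1.0e6                       # 1 MHz source
c = 1480.0                      # sound speed in water (m/s)
k = 2 * np.pi * f / c

nx, dx = 2048, 0.2e-3
x = (np.arange(nx) - nx / 2) * dx
beam = np.exp(-(x / 5e-3) ** 2)           # Gaussian bounded beam at the source

kx = 2 * np.pi * np.fft.fftfreq(nx, dx)   # transverse wavenumbers (rad/m)
kz = np.sqrt((k**2 - kx**2).astype(complex))  # imaginary kz -> evanescent decay

def propagate(field, z, R=None):
    """Phase-shift the angular spectrum by distance z; optionally apply a
    plane-wave reflection coefficient R(kx) at the interface."""
    spec = np.fft.fft(field) * np.exp(1j * kz * z)
    if R is not None:
        spec *= R(kx)
    return np.fft.ifft(spec)

# Placeholder reflectivity; the paper applies the analytic fluid-solid R here.
reflected = propagate(beam, z=50e-3, R=lambda kx: np.ones_like(kx))
print("peak |p| after 50 mm:", np.abs(reflected).max())
```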

  17. Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions

    NASA Astrophysics Data System (ADS)

    Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu

    2014-04-01

    Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine, to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and personalized regenerative medicine. Using time-lapse imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics.

  18. Cortical neuron activation induced by electromagnetic stimulation: a quantitative analysis via modelling and simulation.

    PubMed

    Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping

    2016-02-01

    Previous simulation work concerned with the mechanism of non-invasive neuromodulation has isolated many of the factors that can influence stimulation potency, but a comprehensive account of the interplay between these factors on realistic neurons is still lacking. To investigate stimulation-evoked neuronal activation comprehensively, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached a minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds, but substantial variance was also observed within layers; reducing pulse length increased the threshold values as well as their variance; and tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that cellular excitability may represent the outcome of competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation closely resemble recordings in physiological and clinical studies, which suggests that, with minimal simplification of the neuron model, the cable theory-based simulation approach has sufficient verisimilitude to give quantitatively accurate evaluations of cell activity in response to an externally applied field. PMID:26719168
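
    A sketch of the threshold search such a study implies, reduced drastically to a single leaky compartment: the paper's models are morphologically detailed, so this only illustrates the bisection wrapper, and all membrane parameters are invented:

```python
import numpy as np

def fires(amplitude, pulse_ms=0.2, dt=0.01, tau_m=10.0, v_th=15.0):
    """Integrate a leaky compartment (mV, ms) driven by a rectangular pulse;
    return True if threshold is reached. A stand-in for a full
    multi-compartment simulation."""
    v, t = 0.0, 0.0
    while t < 5.0:
        drive = amplitude if t < pulse_ms else 0.0
        v += dt * (-v + drive) / tau_m
        if v >= v_th:
            return True
        t += dt
    return False

def threshold(lo=0.0, hi=5000.0, tol=1.0):
    """Bisection on stimulus amplitude, as used to map thresholds across
    field directions and pulse waveforms."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if fires(mid) else (mid, hi)
    return hi

print("threshold amplitude ~", threshold())
```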

  19. Automatic and Quantitative Measurement of Collagen Gel Contraction Using Model-Guided Segmentation.

    PubMed

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R; Zhao, Chunfeng; Amadio, Peter C; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting cell behaviors and tissue material properties. To date, the assessment of collagen gels has relied on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range from circular references (e.g., the culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and the background. We subsequently introduce a deformable circular model (DCM) which utilizes regional intensity contrast and a circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations in gel boundary appearance at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained from the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation, with an average Dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods. PMID:24092954
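
    As a baseline for comparison (not the deformable circular model itself), a hedged sketch of dish detection plus Otsu thresholding with OpenCV; the function calls are real OpenCV APIs, but all thresholds and parameters are illustrative:

```python
import cv2
import numpy as np

def measure_gel(path, px_per_mm):
    """Baseline gel measurement: locate the dish, then threshold inside it.
    The paper's DCM refines this with an intensity-contrast and
    shape-constrained boundary search."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)

    # Circular reference (culture dish) limits the feasible contraction range.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray.shape[0],
                               param1=100, param2=60)
    assert circles is not None, "no dish found"
    cx, cy, r = np.round(circles[0, 0]).astype(int)

    mask = np.zeros_like(gray)
    cv2.circle(mask, (cx, cy), r, 255, thickness=-1)

    # Otsu threshold (darker gel -> foreground), then restrict to dish interior.
    _, gel = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    gel = cv2.bitwise_and(gel, mask)

    area_px = int(np.count_nonzero(gel))
    area_mm2 = area_px / px_per_mm**2
    diameter_mm = 2 * np.sqrt(area_mm2 / np.pi)   # equivalent-circle diameter
    return area_mm2, diameter_mm
```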

  20. Gas chromatographic quantitative analysis of methanol in wine: operative conditions, optimization and calibration model choice.

    PubMed

    Caruso, Rosario; Gambino, Grazia Laura; Scordino, Monica; Sabatino, Leonardo; Traulo, Pasqualino; Gagliano, Giacomo

    2011-12-01

    The influence of the wine distillation process on methanol content has been determined by quantitative analysis using gas chromatography with flame ionization detection (GC-FID). A comparative study between direct injection of diluted wine and injection of distilled wine was performed. The distillation process does not affect methanol quantification in wines in proportions higher than 10%. While quantification performed on distilled samples gives more reliable results, a screening method based on direct wine injection after a 1:5 dilution with water could be employed. The proposed technique was found to be a compromise between the time-consuming distillation process and direct wine injection. In the studied calibration range, the stability of the volatile compounds in the reference solution is concentration-dependent, the stability being higher in the less concentrated reference solution. To shorten the operation time, a steeper temperature ramp and a higher carrier flow rate were employed; under these conditions, helium consumption and column thermal stress increased. However, detection limits, calibration limits, and analytical method performance are not affected substantially by changing from normal to forced GC conditions. Statistical data evaluation was performed using both ordinary (OLS) and bivariate least squares (BLS) calibration models. Further confirmation was obtained that limit of detection (LOD) values calculated according to the 3σ approach are lower than those obtained with the Hubaux-Vos (H-V) method. The H-V LOD depends upon background noise, calibration parameters and the number of reference standard solutions employed in producing the calibration curve. These remarks are confirmed by both calibration models used. PMID:22312744
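
    A minimal sketch of the OLS calibration and 3σ detection limit discussed above. The calibration points are invented, and the residual standard deviation of the fit is used as a stand-in for the blank noise (BLS and Hubaux-Vos refine this by accounting for errors on both axes and for prediction bands):

```python
import numpy as np

# Illustrative calibration data: methanol standards (mg/L) vs GC-FID peak area.
conc = np.array([10, 25, 50, 100, 200], dtype=float)
area = np.array([51, 124, 253, 498, 1003], dtype=float)

slope, intercept = np.polyfit(conc, area, 1)      # ordinary least squares

# 3-sigma LOD, using the residual standard deviation of the fit (s_y/x)
# as a surrogate for the blank standard deviation.
resid = area - (slope * conc + intercept)
s_yx = np.sqrt(np.sum(resid**2) / (len(conc) - 2))
lod_3sigma = 3 * s_yx / slope
print(f"slope={slope:.3f}, LOD(3 sigma) ~ {lod_3sigma:.1f} mg/L")
```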

  1. A Quantitative Study of Faculty Perceptions and Attitudes on Asynchronous Virtual Teamwork Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Wolusky, G. Anthony

    2016-01-01

    This quantitative study used a web-based questionnaire to assess the attitudes and perceptions of online and hybrid faculty towards student-centered asynchronous virtual teamwork (AVT) using the technology acceptance model (TAM) of Davis (1989). AVT is online student participation in a team approach to problem-solving culminating in a written…

  2. MODIS volcanic ash retrievals vs FALL3D transport model: a quantitative comparison

    NASA Astrophysics Data System (ADS)

    Corradini, S.; Merucci, L.; Folch, A.

    2010-12-01

    Satellite retrievals and transport models represent the key tools for monitoring the evolution of volcanic clouds. Because of the damaging effects of fine ash particles on aircraft, real-time tracking and forecasting of volcanic clouds is critical for aviation safety. Alongside safety, the economic consequences of airport disruptions must also be taken into account: the airport closures caused by the recent Icelandic Eyjafjöll eruption stranded millions of passengers, not only in Europe but across the world, and IATA (the International Air Transport Association) estimates that the worldwide airline industry lost about 2.5 billion euros during the disruption. Both safety and economic concerns require reliable and robust ash cloud retrievals and trajectory forecasting, and intercomparison between remote sensing and modelling is required to ensure precise and reliable volcanic ash products. In this work we perform a quantitative comparison between Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of volcanic ash cloud mass and aerosol optical depth (AOD) and the FALL3D ash dispersal model. MODIS, aboard the NASA Terra and Aqua polar-orbiting satellites, is a multispectral instrument with 36 spectral bands operating in the VIS-TIR spectral range and spatial resolution varying between 250 and 1000 m at nadir. The MODIS channels centered around 11 and 12 micron have been used for the ash retrievals through the brightness temperature difference algorithm and MODTRAN simulations. FALL3D is a 3-D time-dependent Eulerian model for the transport and deposition of volcanic particles that outputs, among other variables, cloud column mass and AOD. Three MODIS images collected on 28, 29 and 30 October 2002 during the Mt. Etna eruption have been considered as test cases. The results show generally good agreement between the retrieved and the modeled volcanic clouds in the first 300 km from the vents. Even if the
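
    A minimal sketch of the brightness temperature difference (BTD) test underlying the MODIS ash retrieval. The threshold here is illustrative, since operational cutoffs are tuned with radiative-transfer (e.g., MODTRAN) simulations:

```python
import numpy as np

def ash_mask(bt11, bt12, threshold=-0.5):
    """Flag ash-contaminated pixels via the brightness temperature difference
    between the ~11 and ~12 micron channels: silicate ash yields negative
    BTD, whereas water/ice clouds yield positive BTD."""
    btd = np.asarray(bt11) - np.asarray(bt12)
    return btd < threshold

# Toy example (K): two ash-like pixels, one water-cloud pixel.
print(ash_mask([265.0, 270.0, 280.0], [266.2, 271.0, 279.0]))
```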

  3. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors

    PubMed Central

    Shuryak, Igor; Dadachova, Ekaterina

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental
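
    A hedged sketch of the style of mechanistic model used for the third data set: reproduction suppressed by dose rate, mortality increasing with dose rate, and constant cell removal, with the critical dose rate defined by where net growth crosses zero. All rate constants are invented:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative per-hour rates; not the paper's fitted values.
r0, k_rep, k_death, washout = 0.30, 0.8, 0.02, 0.05

def net_growth(dose_rate):
    """Net specific growth rate under chronic irradiation: reproduction is
    suppressed exponentially, mortality rises linearly, and cells are
    removed at a constant rate (chemostat-style)."""
    return r0 * np.exp(-k_rep * dose_rate) - k_death * dose_rate - washout

# Critical dose rate: net growth crosses zero -> extinction beyond this point.
# Comparing k_rep and k_death sensitivities illustrates finding (4).
critical = brentq(net_growth, 0.0, 50.0)
print(f"critical dose rate ~ {critical:.2f} Gy/h")
```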

  4. Quantitative Modeling of Microbial Population Responses to Chronic Irradiation Combined with Other Stressors.

    PubMed

    Shuryak, Igor; Dadachova, Ekaterina

    2016-01-01

    Microbial population responses to combined effects of chronic irradiation and other stressors (chemical contaminants, other sub-optimal conditions) are important for ecosystem functioning and bioremediation in radionuclide-contaminated areas. Quantitative mathematical modeling can improve our understanding of these phenomena. To identify general patterns of microbial responses to multiple stressors in radioactive environments, we analyzed three data sets on: (1) bacteria isolated from soil contaminated by nuclear waste at the Hanford site (USA); (2) fungi isolated from the Chernobyl nuclear-power plant (Ukraine) buildings after the accident; (3) yeast subjected to continuous γ-irradiation in the laboratory, where radiation dose rate and cell removal rate were independently varied. We applied generalized linear mixed-effects models to describe the first two data sets, whereas the third data set was amenable to mechanistic modeling using differential equations. Machine learning and information-theoretic approaches were used to select the best-supported formalism(s) among biologically plausible alternatives. Our analysis suggests the following: (1) Both radionuclides and co-occurring chemical contaminants (e.g. NO2) are important for explaining microbial responses to radioactive contamination. (2) Radionuclides may produce non-monotonic dose responses: stimulation of microbial growth at low concentrations vs. inhibition at higher ones. (3) The extinction-defining critical radiation dose rate is dramatically lowered by additional stressors. (4) Reproduction suppression by radiation can be more important for determining the critical dose rate than radiation-induced cell mortality. In conclusion, the modeling approaches used here on three diverse data sets provide insight into explaining and predicting multi-stressor effects on microbial communities: (1) the most severe effects (e.g. extinction) on microbial populations may occur when unfavorable environmental

  5. Quantitative Hydrogeological Framework Interpretations from Modeling Helicopter Electromagnetic Survey Data, Nebraska Panhandle

    NASA Astrophysics Data System (ADS)

    Abraham, J. D.; Ball, L. B.; Bedrosian, P. A.; Cannia, J. C.; Deszcz-Pan, M.; Minsley, B. J.; Peterson, S. M.; Smith, B. D.

    2009-12-01

    The need for allocation and management of water resources within the state of Nebraska has created a demand for innovative approaches to data collection for the development of hydrogeologic frameworks used in 2D and 3D groundwater models. In 2008, the USGS, in cooperation with the North Platte Natural Resources District, the South Platte Natural Resources District, and the University of Nebraska Conservation and Survey Division, began using frequency-domain helicopter electromagnetic (HEM) surveys to map selected sections of the Nebraska Panhandle. The surveys took place in selected sections of the North Platte River valley, Lodgepole Creek, and portions of the adjacent tablelands. The objective of the surveys is to map the aquifers of the area to improve understanding of groundwater-surface water relationships and to develop better hydrogeologic frameworks for more accurate 3D groundwater models of the area. For the HEM method to have an impact on a groundwater model at the basin scale, hydrostratigraphic units need to have detectable physical property (electrical resistivity) contrasts. Where such contrasts exist within the study area and are detectable from an airborne platform, large areas can be surveyed to rapidly generate 2D and 3D maps and models of hydrogeologic features. To make the geophysical data useful to multidimensional groundwater models, numerical inversion is necessary to produce a depth-dependent physical property data set reflecting hydrogeologic features. These maps and depth images of electrical resistivity are not, in themselves, directly useful to the hydrogeologist; they need to be translated into maps and depth images of hydrostratigraphic units and hydrogeologic features. Through a process of numerical imaging, inversion, sensitivity analysis, geological ground-truthing (boreholes), and geological interpretation, hydrogeologic features are characterized. Resistivity depth sections produced from this process are used to pick
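
    A generic sketch of the smoothness-regularized least-squares inversion that turns geophysical responses into depth-dependent property models. The linear kernel below is a toy stand-in, since real HEM inversion is nonlinear in the layer resistivities:

```python
import numpy as np

def tikhonov_invert(G, d, lam):
    """Solve min ||G m - d||^2 + lam * ||L m||^2 with a first-difference
    roughness operator L, the usual smooth-model (Occam-style) choice."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)          # first-difference roughness
    A = G.T @ G + lam * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Toy stand-in: smooth kernel mapping a 20-layer model to 8 frequencies.
rng = np.random.default_rng(0)
depths = np.linspace(0, 1, 20)
G = np.exp(-np.outer(np.linspace(1, 8, 8), depths))   # decaying sensitivities
m_true = 1.0 + np.exp(-((depths - 0.5) / 0.1) ** 2)   # conductive layer
d = G @ m_true + rng.normal(0, 0.01, 8)               # noisy synthetic data

m_est = tikhonov_invert(G, d, lam=1e-2)
print("recovered model range:", m_est.min(), m_est.max())
```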

  6. Quantitative Structure-Property Relationship (QSPR) Modeling of Drug-Loaded Polymeric Micelles via Genetic Function Approximation

    PubMed Central

    Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan

    2015-01-01

    Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimized model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments. PMID:25780923
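
    A minimal genetic-algorithm feature selection in the spirit of GFA: binary masks over descriptors evolve under crossover and mutation, scored by cross-validated r² with a size penalty. The data, penalty weight and GA settings are all placeholders:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 12))                 # placeholder descriptor matrix
y = X[:, 1] - 0.5 * X[:, 4] + rng.normal(0, 0.2, 80)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    score = cross_val_score(LinearRegression(), X[:, mask.astype(bool)], y,
                            cv=5, scoring="r2").mean()
    return score - 0.02 * mask.sum()          # lightly penalize model size

pop = rng.integers(0, 2, (30, X.shape[1]))    # random initial population
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]   # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, X.shape[1])     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05  # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected descriptors:", np.flatnonzero(best))
```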

  7. Quantitative Prediction of Drug–Drug Interactions Involving Inhibitory Metabolites in Drug Development: How Can Physiologically Based Pharmacokinetic Modeling Help?

    PubMed Central

    Chen, Y; Mao, J; Lin, J; Yu, H; Peters, S; Shebley, M

    2016-01-01

    This subteam under the Drug Metabolism Leadership Group (Innovation and Quality Consortium) investigated the quantitative role of circulating inhibitory metabolites in drug–drug interactions using physiologically based pharmacokinetic (PBPK) modeling. Three drugs with major circulating inhibitory metabolites (amiodarone, gemfibrozil, and sertraline) were systematically evaluated in addition to the literature review of recent examples. The application of PBPK modeling in drug interactions by inhibitory parent–metabolite pairs is described and guidance on strategic application is provided. PMID:27642087
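
    Before a full PBPK model, a common first-pass estimate is the mechanistic static model, extended with a term for the circulating inhibitory metabolite. The equation form is standard for reversible inhibition, but the fm, concentrations and Ki values below are purely illustrative:

```python
def auc_ratio(fm, inhibitors):
    """Mechanistic static model for reversible inhibition: the victim drug's
    fraction metabolized by the inhibited enzyme (fm) is scaled by the
    combined competitive terms of the parent drug and its circulating
    metabolite. inhibitors: list of (unbound concentration, Ki) pairs."""
    inhibition = 1.0 + sum(I / Ki for I, Ki in inhibitors)
    return 1.0 / (fm / inhibition + (1.0 - fm))

# Illustrative numbers only: parent (I=0.1 uM, Ki=0.5 uM) plus a circulating
# metabolite (I=0.05 uM, Ki=0.2 uM) against a victim drug with fm = 0.8.
print(f"predicted AUC ratio: {auc_ratio(0.8, [(0.1, 0.5), (0.05, 0.2)]):.2f}")
```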

  8. A bibliography of terrain modeling (geomorphometry), the quantitative representation of topography: supplement 4.0

    USGS Publications Warehouse

    Pike, Richard J.

    2002-01-01

    Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th Century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement
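
    The elemental geomorphometric computation on a DEM is slope and aspect from finite differences. The sketch below uses plain central differences (GIS packages typically use Horn's weighted 3x3 variant) and assumes a north-up grid; the toy DEM is invented:

```python
import numpy as np

def slope_aspect(dem, cellsize):
    """Slope (degrees) and aspect (degrees clockwise from north) from a
    regular-grid DEM via central differences. Sign conventions assume
    north-up rows; production tools use Horn's weighted kernel."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)   # rows (y), then columns (x)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# Toy DEM: an inclined plane dipping eastward at ~5.7 degrees.
x = np.arange(50) * 10.0
dem = np.tile(-0.1 * x, (50, 1))
slope, aspect = slope_aspect(dem, cellsize=10.0)
print(slope[25, 25], aspect[25, 25])   # ~5.71 degrees, aspect ~90 (east)
```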

  9. Evaluation of Quantitative Precipitation Estimations (QPE) and Hydrological Modelling in IFloodS Focal Basins