Modeling the Effect of Polychromatic Light in Quantitative Absorbance Spectroscopy
ERIC Educational Resources Information Center
Smith, Rachel; Cantrell, Kevin
2007-01-01
A laboratory experiment is conducted to give students practical experience with the principles of electronic absorbance spectroscopy. This straightforward approach creates a powerful tool for exploring many aspects of quantitative absorbance spectroscopy.
Testing process predictions of models of risky choice: a quantitative model comparison approach
Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard
2013-01-01
This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
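The priority heuristic's process predictions are concrete enough to sketch in code. Below is a minimal Python sketch of the published decision rule for two-outcome gain gambles (minimum gains first, then their probabilities, then maximum gains, with 1/10 aspiration levels); the gamble encoding and example values are ours, not the authors', and the rounding of aspiration levels to prominent numbers is omitted.

```python
def priority_heuristic(a, b):
    """Choose between two-outcome gain gambles a and b, each encoded as
    (min_gain, p_min, max_gain). Reasons are examined in a fixed order and
    search stops as soon as an aspiration level is met."""
    max_gain = max(a[2], b[2])
    # Reason 1: minimum gains; aspiration level is 1/10 of the maximum gain
    if abs(a[0] - b[0]) >= 0.1 * max_gain:
        return a if a[0] > b[0] else b
    # Reason 2: probabilities of the minimum gains; aspiration level is 0.1
    if abs(a[1] - b[1]) >= 0.1:
        return a if a[1] < b[1] else b
    # Reason 3: maximum gains decide
    return a if a[2] > b[2] else b

# 500 for sure-ish vs. a long shot: reason 1 already decides here
print(priority_heuristic((500, 0.5, 2500), (0, 0.88, 2000)))
```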
Highlights from High Energy Neutrino Experiments at CERN
NASA Astrophysics Data System (ADS)
Schlatter, W.-D.
2015-07-01
Experiments with high energy neutrino beams at CERN provided early quantitative tests of the Standard Model. This article describes results from studies of the nucleon quark structure and of the weak current, together with the precise measurement of the weak mixing angle. These results have established a new quality for tests of the electroweak model. In addition, the measurements of the nucleon structure functions in deep inelastic neutrino scattering allowed first quantitative tests of QCD.
NASA Astrophysics Data System (ADS)
Reineker, P.; Kenkre, V. M.; Kühne, R.
1981-08-01
A quantitative comparison is given between a simple theoretical prediction for the drift mobility of photo-electrons in organic molecular crystals, calculated within the model of coupled band-like and hopping motion, and the experiments in naphthalene of Schein et al. and Karl et al.

Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
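The abstract does not reproduce the harvest rules themselves, but a constant-escapement policy on an unstructured logistic model illustrates the kind of model-based decision being compared. This is our illustration: the escapement rule and all parameter values are assumptions, not the paper's.

```python
def escapement_harvest(n, r=0.8, K=1000.0):
    """Constant-escapement rule on a logistic model: harvest any biomass
    above the escapement level that maximizes sustained yield (K/2 for
    logistic growth). All numbers are illustrative."""
    growth = r * n * (1.0 - n / K)
    return max(0.0, n + growth - K / 2.0)

# One season: population of 800 -> recommended harvest of 428
print(escapement_harvest(800.0))
```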
A quantitative model of optimal data selection in Wason's selection task.
Hattori, Masasi
2002-10-01
The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.
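As background, the core computation in optimal data selection is the expected information gain of turning each card, with rarity entering through the marginals P(p) and P(q). The sketch below follows Oaksford and Chater's Bayesian setup; the revised model's selection tendency function is not reproduced, and the rarity values are illustrative.

```python
import math

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_info_gain(card, a, b, prior_dep=0.5):
    """Expected reduction in uncertainty about 'dependence' vs 'independence'
    from turning a card. a = P(p), b = P(q), with b >= a assumed."""
    # Probability the hidden face is q (for p / not-p) or p (for q / not-q)
    likelihood = {
        "p":     {"dep": 1.0,               "ind": b},
        "not-p": {"dep": (b - a) / (1 - a), "ind": b},
        "q":     {"dep": a / b,             "ind": a},
        "not-q": {"dep": 0.0,               "ind": a},
    }[card]
    prior = {"dep": prior_dep, "ind": 1.0 - prior_dep}
    h0 = entropy(prior.values())
    gain = 0.0
    for hit in (True, False):  # hidden face is / is not the relevant one
        p_out = sum(prior[m] * (likelihood[m] if hit else 1 - likelihood[m])
                    for m in prior)
        if p_out == 0:
            continue
        post = [prior[m] * (likelihood[m] if hit else 1 - likelihood[m]) / p_out
                for m in prior]
        gain += p_out * (h0 - entropy(post))
    return gain

# Rare antecedent and consequent favor turning the p and q cards
for c in ("p", "not-p", "q", "not-q"):
    print(c, round(expected_info_gain(c, a=0.1, b=0.2), 4))
```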
Kessner, Darren; Novembre, John
2015-01-01
Evolve and resequence studies combine artificial selection experiments with massively parallel sequencing technology to study the genetic basis for complex traits. In these experiments, individuals are selected for extreme values of a trait, causing alleles at quantitative trait loci (QTL) to increase or decrease in frequency in the experimental population. We present a new analysis of the power of artificial selection experiments to detect and localize quantitative trait loci. This analysis uses a simulation framework that explicitly models whole genomes of individuals, quantitative traits, and selection based on individual trait values. We find that explicitly modeling QTL provides qualitatively different insights than considering independent loci with constant selection coefficients. Specifically, we observe how interference between QTL under selection affects the trajectories and lengthens the fixation times of selected alleles. We also show that a substantial portion of the genetic variance of the trait (50–100%) can be explained by detected QTL in as little as 20 generations of selection, depending on the trait architecture and experimental design. Furthermore, we show that power depends crucially on the opportunity for recombination during the experiment. Finally, we show that an increase in power is obtained by leveraging founder haplotype information to obtain allele frequency estimates. PMID:25672748
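A stripped-down forward simulation conveys the basic design of an evolve-and-resequence experiment. This sketch tracks a single biallelic QTL under truncation selection and ignores the linkage and interference effects that the paper's whole-genome framework is designed to capture; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def artificial_selection(n_ind=500, n_gen=20, f0=0.1, effect=1.0,
                         env_sd=1.0, top_frac=0.2):
    """Track one biallelic QTL under truncation selection on a quantitative
    trait: individuals with the highest phenotypes found the next generation."""
    f = f0
    for _ in range(n_gen):
        genotypes = rng.binomial(2, f, n_ind)        # allele counts 0/1/2
        phenotype = effect * genotypes + rng.normal(0.0, env_sd, n_ind)
        selected = genotypes[np.argsort(phenotype)[-int(top_frac * n_ind):]]
        f = selected.mean() / 2.0                     # frequency among parents
    return f

print(artificial_selection())  # selected-allele frequency after 20 generations
```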
Electromagnetic braking: A simple quantitative model
NASA Astrophysics Data System (ADS)
Levin, Yan; da Silveira, Fernando L.; Rizzato, Felipe B.
2006-09-01
A calculation is presented that quantitatively accounts for the terminal velocity of a cylindrical magnet falling through a long copper or aluminum pipe. The experiment and the theory are a dramatic illustration of Faraday's and Lenz's laws.
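The structure of such a model is compact: if the eddy-current drag is linear in the speed, force balance gives an exponential approach to a terminal velocity,

$$ m\,\frac{dv}{dt} = mg - kv \quad\Longrightarrow\quad v(t) = v_T\left(1 - e^{-t/\tau}\right), \qquad v_T = \frac{mg}{k}, \quad \tau = \frac{m}{k}, $$

where the drag coefficient k collects the dependence on the pipe's conductivity, wall thickness, and radius and on the magnet's dipole moment. This is our summary of the model's form, not the paper's full derivation.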
The College Mathematics Experience and Changes in Majors: A Structural Model Analysis.
ERIC Educational Resources Information Center
Whiteley, Meredith A.; Fenske, Robert H.
1990-01-01
Testing of a structural equation model with college mathematics experience as the focal variable in 745 students' final decisions concerning major or dropping out over 4 years of college yielded separate model estimates for 3 fields: scientific/technical, quantitative business, and business management majors. (Author/MSE)
Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús
2009-01-01
Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short-term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large-scale test experiment performed on these cells, producing false expression changes. A random-effects model was developed that includes four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at the scan and peptide levels was negligible in three large-scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative for performing quantitative proteomics studies at a depth of several thousand proteins. PMID:19181660
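In a standard hierarchical notation, a random-effects model with the four stated variance sources could be written as (our reconstruction; the authors' exact parameterization may differ)

$$ x_{qpsr} = \mu + \pi_q + \rho_{qp} + \delta_{qps} + \varepsilon_{qpsr}, $$

where $x_{qpsr}$ is the log-ratio from spectrum fit $r$ of scan $s$ of peptide $p$ of protein $q$, and the four terms are zero-mean random effects with variances $\sigma^2_{prot}$, $\sigma^2_{pep}$, $\sigma^2_{scan}$, and $\sigma^2_{fit}$, so that the variance of a single measurement decomposes as $\sigma^2 = \sigma^2_{prot} + \sigma^2_{pep} + \sigma^2_{scan} + \sigma^2_{fit}$.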
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample roughness on the mathematical model of NIR quantitative analysis of wood density was studied. The experiments showed that if the roughness of the predicted samples was consistent with that of the calibrated samples, the results were good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.
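The abstract names regression-based multivariate calibration without specifying the algorithm; partial least squares is the usual choice for NIR work, so here is a hedged sketch of a "roughness-mixed" calibration. All spectra, dimensions, and the roughness-dependent baseline are synthetic stand-ins invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR spectra of wood at two surface roughnesses:
# a density-driven signal plus a roughness-dependent baseline shift.
n, wavelengths = 60, 200
density = rng.uniform(0.3, 0.8, 2 * n)                    # g/cm^3
signal = density[:, None] * rng.normal(1.0, 0.1, wavelengths)
baseline = np.r_[np.zeros(n), 0.2 * np.ones(n)][:, None]  # smooth vs rough
X = signal + baseline + rng.normal(0.0, 0.01, (2 * n, wavelengths))

# "Roughness-mixed" calibration: train on both roughness classes together
model = PLSRegression(n_components=5).fit(X, density)
print(f"mixed-roughness calibration R^2 = {model.score(X, density):.3f}")
```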
Mid-Frequency Reverberation Measurements with Full Companion Environmental Support
2014-12-30
acoustic modeling is based on measured stratification and observed wave amplitudes on the New Jersey shelf during the SWARM experiment. Ray tracing is…wave model then gives quantitative results for the clutter. 2. Swarm NLIW model and ray tracing: Nonlinear internal waves are very common on the…receiver in order to give quantitative clutter to reverberation. To picture the mechanism, a set of rays was launched from a source at range zero and
A Didactic Experiment and Model of a Flat-Plate Solar Collector
ERIC Educational Resources Information Center
Gallitto, Aurelio Agliolo; Fiordilino, Emilio
2011-01-01
We report on an experiment performed with a home-made flat-plate solar collector, carried out together with high-school students. To explain the experimental results, we propose a model that describes the heating process of the solar collector. The model accounts quantitatively for the experimental data. We suggest that solar-energy topics should…
A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical cond...
Parallel labeling experiments for pathway elucidation and (13)C metabolic flux analysis.
Antoniewicz, Maciek R
2015-12-01
Metabolic pathway models provide the foundation for quantitative studies of cellular physiology through the measurement of intracellular metabolic fluxes. For model organisms metabolic models are well established, with many manually curated genome-scale model reconstructions, gene knockout studies and stable-isotope tracing studies. However, for non-model organisms a similar level of knowledge is often lacking. Compartmentation of cellular metabolism in eukaryotic systems also presents significant challenges for quantitative (13)C-metabolic flux analysis ((13)C-MFA). Recently, innovative (13)C-MFA approaches have been developed based on parallel labeling experiments, the use of multiple isotopic tracers and integrated data analysis, that allow more rigorous validation of pathway models and improved quantification of metabolic fluxes. Applications of these approaches open new research directions in metabolic engineering, biotechnology and medicine. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zavos, Helena M.S.; Freeman, Daniel; Haworth, Claire M. A.; McGuire, Philip; Plomin, Robert; Cardno, Alastair G.; Ronald, Angelica
2014-01-01
Context: The onset of psychosis is usually preceded by psychotic experiences, but little is known about their causes. The present study investigated the degree of genetic and environmental influences on specific psychotic experiences, assessed dimensionally, in adolescence in the community and in individuals with many, frequent experiences (defined using quantitative cut-offs). The degree of overlap in etiological influences between specific psychotic experiences was also investigated.
Objective: To investigate the degree of genetic and environmental influences on specific psychotic experiences, assessed dimensionally, in adolescence in the community and in individuals having many, frequent experiences (defined using quantitative cut-offs), and to test the degree of overlap in etiological influences between specific psychotic experiences.
Design: Classic twin design. Structural equation model-fitting. Univariate and bivariate twin models, liability threshold models, DeFries-Fulker extremes analysis, and the Cherny method.
Setting: Representative community sample of twins from England and Wales.
Participants: 5059 adolescent twin pairs (mean age: 16.31 yrs, SD: 0.68 yrs).
Main outcome measure: Psychotic experiences assessed as quantitative traits (self-rated paranoia, hallucinations, cognitive disorganization, grandiosity, anhedonia; parent-rated negative symptoms).
Results: Genetic influences were apparent for all psychotic experiences (15-59%), with modest shared environment for hallucinations and negative symptoms (17-24%) and significant nonshared environment (49-64% for the self-rated scales, 17% for parent-rated negative symptoms). Three different empirical approaches converged to suggest that the etiology in extreme groups (most extreme-scoring 5%, 10% and 15%) did not differ significantly from that of the whole distribution. There was no linear change in heritability across the distribution of psychotic experiences, with the exception of a modest increase in heritability with increasing severity of parent-rated negative symptoms. Where psychotic experiences covaried, this appeared to be due to shared genetic influences (bivariate heritabilities = .54-.71).
Conclusions and Relevance: These findings are consistent with the concept of a psychosis continuum, suggesting that the same genetic and environmental factors influence both extreme, frequent psychotic experiences and milder, less frequent manifestations in adolescents. Individual psychotic experiences in adolescence, assessed quantitatively, have lower heritability estimates and higher estimates of nonshared environment than those for the liability to schizophrenia. Heritability varies by type of psychotic experience, being highest for paranoia and parent-rated negative symptoms, and lowest for hallucinations. PMID:25075799
Growth of wormlike micelles in nonionic surfactant solutions: Quantitative theory vs. experiment.
Danov, Krassimir D; Kralchevsky, Peter A; Stoyanov, Simeon D; Cook, Joanne L; Stott, Ian P; Pelan, Eddie G
2018-06-01
Despite the considerable advances in the molecular-thermodynamic theory of micelle growth, agreement between theory and experiment has been achieved only in isolated cases. A general theory that can provide a self-consistent quantitative description of the growth of wormlike micelles in mixed surfactant solutions, including the experimentally observed high peaks in viscosity and aggregation number, is still missing. As a step toward the creation of such a theory, here we consider the simplest system: nonionic wormlike surfactant micelles from polyoxyethylene alkyl ethers, CiEj. Our goal is to construct a molecular-thermodynamic model that is in agreement with the available experimental data. For this goal, we systematized data for the micelle mean mass aggregation number, from which the micelle growth parameter was determined at various temperatures. None of the available models can give a quantitative description of these data. We constructed a new model, which is based on theoretical expressions for the interfacial-tension, headgroup-steric and chain-conformation components of the micelle free energy, along with appropriate expressions for the parameters of the model, including their temperature and curvature dependencies. Special attention was paid to the surfactant chain-conformation free energy, for which a new, more general formula was derived. As a result, relatively simple theoretical expressions are obtained. All parameters that enter these expressions are known, which facilitates the theoretical modeling of micelle growth for various nonionic surfactants in excellent agreement with experiment. The constructed model can serve as a basis that can be further upgraded to obtain a quantitative description of micelle growth in more complicated systems, including binary and ternary mixtures of nonionic, ionic and zwitterionic surfactants, which determine the viscosity and stability of various formulations in personal-care and household detergency. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
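For orientation, in the standard ladder-model picture that underlies such molecular-thermodynamic treatments, the mass-average aggregation number of wormlike micelles grows as

$$ \bar n_M \approx 2\sqrt{K\,(C - \mathrm{CMC})}, \qquad K = e^{E_{sc}/k_B T}, $$

where the growth parameter K is set by the excess free energy $E_{sc}$ of the two endcaps relative to cylindrical packing; the paper's model supplies this free energy from its interfacial-tension, headgroup-steric, and chain-conformation components. This is our summary of the generic framework, not the authors' specific expressions.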
NASA Astrophysics Data System (ADS)
Matthews, Kelly E.; Adams, Peter; Goos, Merrilyn
2016-07-01
Application of mathematical and statistical thinking and reasoning, typically referred to as quantitative skills, is essential for university bioscience students. First, this study developed an assessment task intended to gauge graduating students' quantitative skills. The Quantitative Skills Assessment of Science Students (QSASS) was the result, which examined 10 mathematical and statistical sub-topics. Second, the study established an evidential baseline of students' quantitative skills performance and confidence levels by piloting the QSASS with 187 final-year biosciences students at a research-intensive university. The study is framed within the planned-enacted-experienced curriculum model and contributes to science reform efforts focused on enhancing the quantitative skills of university graduates, particularly in the biosciences. The results found, on average, weak performance and low confidence on the QSASS, suggesting divergence between academics' intentions and students' experiences of learning quantitative skills. Implications for curriculum design and future studies are discussed.
Quantitative analysis of intra-Golgi transport shows intercisternal exchange for all cargo
Dmitrieff, Serge; Rao, Madan; Sens, Pierre
2013-01-01
The mechanisms controlling the transport of proteins through the Golgi stack of mammalian and plant cells are the subject of intense debate, with two models, cisternal progression and intercisternal exchange, emerging as major contenders. A variety of transport experiments have claimed support for each of these models. We reevaluate these experiments using a single quantitative coarse-grained framework of intra-Golgi transport that accounts for both transport models and their many variants. Our analysis makes a definitive case for the existence of intercisternal exchange both for small membrane proteins and for large protein complexes; this implies that membrane structures larger than the typical protein-coated vesicles must be involved in transport. Notwithstanding, we find that current observations on protein transport cannot rule out cisternal progression as contributing significantly to the transport process. To discriminate between the different models of intra-Golgi transport, we suggest experiments and an analysis based on our extended theoretical framework that compare the dynamics of transiting and resident proteins. PMID:24019488
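The coarse-grained framework can be illustrated with a toy version: cargo on a chain of cisternae, moving by progression (advection) and exchange (symmetric hopping) and exiting at the trans face. This sketch is our construction for illustration, not the authors' equations; all rates are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# N cisternae in series; cargo moves by cisternal progression (advection,
# rate kp) plus intercisternal exchange (symmetric hopping, rate kx).
N, kp, kx, kexit = 5, 0.02, 0.10, 0.05   # rates in 1/min, illustrative

def rhs(t, c):
    dc = np.zeros(N)
    for i in range(N - 1):
        flux = kp * c[i] + kx * (c[i] - c[i + 1])  # net flux from i to i+1
        dc[i] -= flux
        dc[i + 1] += flux
    dc[-1] -= kexit * c[-1]                        # secretion from trans face
    return dc

c0 = np.eye(N)[0]                                  # pulse of cargo at cis face
sol = solve_ivp(rhs, (0.0, 120.0), c0, t_eval=np.linspace(0.0, 120.0, 7))
print(sol.y[-1])  # cargo reaching the trans cisterna over time
```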
Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J
2009-01-01
Background: The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments.
Results: In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows transformation of logic operations into a system of ordinary differential equations (ODE). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time-courses for multiple ligand concentrations and binding affinities of different ligands. This shows that from the continuous model we may obtain biological insights not evident from the discrete one.
Conclusion: The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
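The canonical transformation itself is short enough to state. Multilinear interpolation extends a Boolean update function $B\colon \{0,1\}^n \to \{0,1\}$ to the unit cube, and each species then relaxes toward its interpolated target:

$$ \bar B(x_1,\dots,x_n) \;=\; \sum_{(b_1,\dots,b_n)\in\{0,1\}^n} B(b_1,\dots,b_n)\,\prod_{i=1}^{n}\bigl(b_i x_i + (1-b_i)(1-x_i)\bigr), $$

$$ \frac{dx_i}{dt} \;=\; \frac{1}{\tau_i}\bigl(\bar B_i(x) - x_i\bigr), $$

with $\tau_i$ a life-time for species i. The symbols are ours; the paper also discusses sigmoidal Hill-type variants of the interpolation.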
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
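As a concrete anchor for the mutual-information-based inference, the following sketch shows a plug-in MI estimate between a model's predicted activities and noisy measurements; in practice one would search parameter space for the parameters maximizing this quantity rather than a likelihood. The estimator and the synthetic data are our illustration, not the authors' code.

```python
import numpy as np

def plugin_mutual_information(x, y, bins=20):
    """Crude plug-in estimate of I[X;Y] in bits from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
pred = rng.normal(size=5000)                    # model-predicted activity
meas = pred + rng.normal(scale=0.5, size=5000)  # noisy measured readout
print(plugin_mutual_information(pred, meas))
```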
Magnetically launched flyer plate technique for probing electrical conductivity of compressed copper
NASA Astrophysics Data System (ADS)
Cochrane, K. R.; Lemke, R. W.; Riford, Z.; Carpenter, J. H.
2016-03-01
The electrical conductivity of materials under extremes of temperature and pressure is of crucial importance for a wide variety of phenomena, including planetary modeling, inertial confinement fusion, and pulsed power based dynamic materials experiments. There is a dearth of experimental techniques and data for highly compressed materials, even at known states such as along the principal isentrope and Hugoniot, where many pulsed power experiments occur. We present a method for developing, calibrating, and validating material conductivity models as used in magnetohydrodynamic (MHD) simulations. The difficulty in calibrating a conductivity model lies in knowing where the model should be modified. Our method isolates those regions that will have an impact. It also quantitatively prioritizes which regions will have the most beneficial impact. Finally, it tracks the quantitative improvements to the conductivity model during each incremental adjustment. In this paper, we use an experiment on Sandia National Laboratories' Z-machine to isentropically launch multiple flyer plates and, with the MHD code ALEGRA and the optimization code DAKOTA, calibrate the conductivity such that we match an experimental figure of merit to ±1%.
The MODE family of facility class experiments
NASA Technical Reports Server (NTRS)
Miller, David W.
1992-01-01
The objective of the Middeck 0-gravity Dynamics Experiment (MODE) is to characterize fundamental 0-g slosh behavior and obtain quantitative data on slosh force and spacecraft response for correlation of the analytical model. The topics are presented in viewgraph form and include the following: space results; STA objectives, requirements, and approach; comparison of ground to orbital data for the baseline configuration; conclusions of orbital testing; flight experiment resources; Middeck Active Control Experiment (MACE); MACE 1-G and 0-G models; and future efforts.
ERIC Educational Resources Information Center
Pilten, Gulhiz
2016-01-01
The purpose of the present research is investigating the effects of reciprocal teaching in comprehending expository texts. The research was designed with mixed method. The quantitative dimension of the present research was designed in accordance with pre-test-post-test control group experiment model. The quantitative dimension of the present…
ERIC Educational Resources Information Center
Sun, Yan; Strobel, Johannes; Newby, Timothy J.
2017-01-01
Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…
Deployment of e-health services - a business model engineering strategy.
Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R
2010-01-01
We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Extension of nanoconfined DNA: Quantitative comparison between experiment and theory
NASA Astrophysics Data System (ADS)
Iarko, V.; Werner, E.; Nyberg, L. K.; Müller, V.; Fritzsche, J.; Ambjörnsson, T.; Beech, J. P.; Tegenfeldt, J. O.; Mehlig, K.; Westerlund, F.; Mehlig, B.
2015-12-01
The extension of DNA confined to nanochannels has been studied intensively and in detail. However, quantitative comparisons between experiments and model calculations are difficult because most theoretical predictions involve undetermined prefactors, and because the model parameters (contour length, Kuhn length, effective width) are difficult to compute reliably, leading to substantial uncertainties. Here we use a recent asymptotically exact theory for the DNA extension in the "extended de Gennes regime" that allows us to compare experimental results with theory. For this purpose, we performed experiments measuring the mean DNA extension and its standard deviation while varying the channel geometry, dye intercalation ratio, and ionic strength of the buffer. The experimental results agree very well with theory at high ionic strengths, indicating that the model parameters are reliable. At low ionic strengths, the agreement is less good. We discuss possible reasons. In principle, our approach allows us to measure the Kuhn length and the effective width of a single DNA molecule and more generally of semiflexible polymers in solution.
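In this regime the comparison rests on two closed-form predictions. For contour length L, Kuhn length $\ell_K$, effective width w, and channel size D, the extension and its fluctuations take the form

$$ \langle X\rangle = \alpha\,L\left(\frac{\ell_K\,w}{D^2}\right)^{1/3}, \qquad \operatorname{Var}(X) = \beta\,L\,\ell_K, $$

with universal prefactors $\alpha$ and $\beta$ that the cited asymptotically exact theory pins down; fixing these prefactors is precisely what removes the undetermined-prefactor problem described above. The notation here is ours, and the scaling forms are the standard extended de Gennes results rather than the paper's exact expressions.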
de Monchy, Romain; Rouyer, Julien; Destrempes, François; Chayer, Boris; Cloutier, Guy; Franceschini, Emilie
2018-04-01
Quantitative ultrasound techniques based on the backscatter coefficient (BSC) have been commonly used to characterize red blood cell (RBC) aggregation. Specifically, a scattering model is fitted to measured BSC and estimated parameters can provide a meaningful description of the RBC aggregates' structure (i.e., aggregate size and compactness). In most cases, scattering models assumed monodisperse RBC aggregates. This study proposes the Effective Medium Theory combined with the polydisperse Structure Factor Model (EMTSFM) to incorporate the polydispersity of aggregate size. From the measured BSC, this model allows estimating three structural parameters: the mean radius of the aggregate size distribution, the width of the distribution, and the compactness of the aggregates. Two successive experiments were conducted: a first experiment on blood sheared in a Couette flow device coupled with an ultrasonic probe, and a second experiment, on the same blood sample, sheared in a plane-plane rheometer coupled to a light microscope. Results demonstrated that the polydisperse EMTSFM provided the best fit to the BSC data when compared to the classical monodisperse models for the higher levels of aggregation at hematocrits between 10% and 40%. Fitting the polydisperse model yielded aggregate size distributions that were consistent with direct light microscope observations at low hematocrits.
Quantitative Structure–Property Relationship Modeling of Remote Liposome Loading of Drugs
Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram
2012-01-01
Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a dataset including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and five-fold external validation. The external prediction accuracy for binary models was as high as 91–96%; for continuous models the mean coefficient R2 for regression between predicted versus observed values was 0.76–0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. PMID:22154932
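A hedged sketch of the validation protocol (five-fold external validation plus a Y-randomization control) on synthetic stand-ins: the random forest is a placeholder for the paper's unspecified "advanced machine learning approaches", and all data are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

# Stand-ins: rows = loading experiments; columns = computed descriptors plus
# experimental conditions; labels = high vs low initial drug/lipid ratio.
X = rng.normal(size=(366, 40))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=366)) > 0

cv = KFold(n_splits=5, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv).mean()

# Y-randomization: accuracy on shuffled labels should drop to ~chance,
# confirming that the model is not fitting noise.
acc_rand = cross_val_score(clf, X, rng.permutation(y), cv=cv).mean()
print(f"5-fold accuracy {acc:.2f} vs. y-randomized {acc_rand:.2f}")
```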
Quantitative reactive modeling and verification.
Henzinger, Thomas A
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.
2013-01-01
ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the observation system of interest. Observation systems other than those based on GNSS can also be analyzed. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an Internal Reference Ionosphere (IRI) model that has been developed by international ionospheric research communities, observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.
Modeling of Receptor Tyrosine Kinase Signaling: Computational and Experimental Protocols.
Fey, Dirk; Aksamitiene, Edita; Kiyatkin, Anatoly; Kholodenko, Boris N
2017-01-01
The advent of systems biology has convincingly demonstrated that the integration of experiments and dynamic modeling is a powerful approach to understand cellular network biology. Here we present experimental and computational protocols that are necessary for applying this integrative approach to the quantitative studies of receptor tyrosine kinase (RTK) signaling networks. Signaling by RTKs controls multiple cellular processes, including the regulation of cell survival, motility, proliferation, differentiation, glucose metabolism, and apoptosis. We describe methods of model building and training on experimentally obtained quantitative datasets, as well as experimental methods of obtaining quantitative dose-response and temporal dependencies of protein phosphorylation and activities. The presented methods make possible (1) both the fine-grained modeling of complex signaling dynamics and identification of salient, coarse-grained network structures (such as feedback loops) that bring about intricate dynamics, and (2) experimental validation of dynamic models.
Wu, Wensheng; Zhang, Canyang; Lin, Wenjing; Chen, Quan; Guo, Xindong; Qian, Yu; Zhang, Lijuan
2015-01-01
Self-assembled nano-micelles of amphiphilic polymers represent a novel anticancer drug delivery system. However, their full clinical utilization remains challenging because the quantitative structure-property relationship (QSPR) between the polymer structure and the efficacy of micelles as a drug carrier is poorly understood. Here, we developed a series of QSPR models to account for the drug loading capacity of polymeric micelles using the genetic function approximation (GFA) algorithm. These models were further evaluated by internal and external validation and a Y-randomization test in terms of stability and generalization, yielding an optimization model that is applicable to an expanded materials regime. As confirmed by experimental data, the relationship between microstructure and drug loading capacity can be well-simulated, suggesting that our models are readily applicable to the quantitative evaluation of the drug-loading capacity of polymeric micelles. Our work may offer a pathway to the design of formulation experiments.
Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru
2009-04-27
Model checking approaches were first applied to biological pathway validation around 2003. Recently, Fisher et al. demonstrated the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be indispensable parts of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology for modeling and analyzing in silico models incorporating the model checking approach. We propose a novel method of modeling and simulating biological systems with model checking, based on hybrid functional Petri nets with extension (HFPNe) as a framework that deals with both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components using HFPNe. Second, we apply two major biological fate determination rules, Rule I and Rule II, to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully performed using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. Hybrid lineages are difficult to interpret with a discrete model because they occur when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to its higher coverage of predicted fate patterns (except for the genotype of lin-15ko; lin-12ko double mutants). Further insights are also suggested. The quantitative simulation-based model checking approach is a useful means of providing valuable biological insights and better understandings of biological systems and observation data that may be hard to capture with the qualitative approach.
NASA Technical Reports Server (NTRS)
Chilingaryan, A. A.; Galfayan, S. K.; Zazyan, M. Z.; Dunaevsky, A. M.
1985-01-01
Nonparametric statistical methods are used to carry out a quantitative comparison of the model and the experimental data. The same methods enable one to select the events initiated by heavy nuclei and to calculate the proportion of such events. For this purpose it is necessary to have data on artificial events that describe the experiment sufficiently well. At present, the model with small scaling violation in the fragmentation region is the closest to the experiments. Therefore, the gamma families obtained in the Pamir experiment are currently being analyzed with these models.
A Quantitative Model of Early Atherosclerotic Plaques Parameterized Using In Vitro Experiments.
Thon, Moritz P; Ford, Hugh Z; Gee, Michael W; Myerscough, Mary R
2018-01-01
There are a growing number of studies that model immunological processes in the artery wall that lead to the development of atherosclerotic plaques. However, few of these models use parameters that are obtained from experimental data even though data-driven models are vital if mathematical models are to become clinically relevant. We present the development and analysis of a quantitative mathematical model for the coupled inflammatory, lipid and macrophage dynamics in early atherosclerotic plaques. Our modeling approach is similar to the biologists' experimental approach where the bigger picture of atherosclerosis is put together from many smaller observations and findings from in vitro experiments. We first develop a series of three simpler submodels which are least-squares fitted to various in vitro experimental results from the literature. Subsequently, we use these three submodels to construct a quantitative model of the development of early atherosclerotic plaques. We perform a local sensitivity analysis of the model with respect to its parameters that identifies critical parameters and processes. Further, we present a systematic analysis of the long-term outcome of the model which produces a characterization of the stability of model plaques based on the rates of recruitment of low-density lipoproteins, high-density lipoproteins and macrophages. The analysis of the model suggests that further experimental work quantifying the different fates of macrophages as a function of cholesterol load and the balance between free cholesterol and cholesterol ester inside macrophages may give valuable insight into long-term atherosclerotic plaque outcomes. This model is an important step toward models applicable in a clinical setting.
Comfort and Accessibility Evaluation of Light Rail Vehicles
NASA Astrophysics Data System (ADS)
Hirasawa, Takayuki; Matsuoka, Shigeki; Suda, Yoshihiro
A quantitative evaluation method for the passenger rooms of light rail vehicles, from the viewpoint of comfort and accessibility, is proposed as the result of physical modeling of the in-vehicle behavior of passengers based on Gibson's ecological psychology approach. The model parameters are identified from experiments on real vehicles at the depot of Kumamoto municipal transport and on the full-scale mockup at the University of Tokyo. The developed model enables quantitative evaluation, in comparison to commuter railway vehicles, of the floor-lowering effects obtained by abolishing internal steps at passenger doorways, and of door-usage restriction scenarios, from the viewpoint of both passengers and operators.
A computational model of selection by consequences.
McDowell, J J
2004-05-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior.
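The hyperbolic form referred to here is Herrnstein's hyperbola, and fitting it is a two-parameter exercise; the rates below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(r, k, r_e):
    """Herrnstein's hyperbola: response rate as a function of
    reinforcement rate, the form fitted to the digital organism's data."""
    return k * r / (r + r_e)

r = np.array([8.3, 25.0, 50.0, 100.0, 300.0])  # reinforcers/hr (hypothetical)
B = np.array([21.0, 40.0, 52.0, 61.0, 70.0])   # responses/min (hypothetical)
(k, r_e), _ = curve_fit(hyperbola, r, B, p0=(80.0, 50.0))
print(f"k = {k:.1f} responses/min, r_e = {r_e:.1f} reinforcers/hr")
```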
The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model
Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim
2013-01-01
There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives for the variance of the buffer-subtracted SAXS intensity σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
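Since the error model is given in closed form, generating realistic noise for a simulated profile is a one-liner. The Guinier-like intensity and the values of k and const below are invented placeholders; in practice k and const are fitted per experimental setup.

```python
import numpy as np

def saxs_variance(I, q, k, const):
    """The paper's general error model: sigma^2(q) = (I(q) + const) / (k q)."""
    return (I + const) / (k * q)

q = np.linspace(0.01, 0.5, 200)            # momentum transfer (1/Angstrom)
I = 1e3 * np.exp(-(35.0 * q) ** 2 / 3.0)   # toy Guinier-like profile, Rg ~ 35 A
sigma = np.sqrt(saxs_variance(I, q, k=5.0e4, const=10.0))
noisy = I + np.random.default_rng(2).normal(0.0, sigma)  # simulated measurement
```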
A statistical framework for protein quantitation in bottom-up MS-based proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level.
Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods.
Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/).
Contact: adabney@stat.tamu.edu
Models of volcanic eruption hazards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohletz, K.H.
1992-01-01
Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.
A Research Methodology for Studying What Makes Some Problems Difficult to Solve
ERIC Educational Resources Information Center
Gulacar, Ozcan; Fynewever, Herb
2010-01-01
We present a quantitative model for predicting the level of difficulty subjects will experience with specific problems. The model explicitly accounts for the number of subproblems a problem can be broken into and the difficulty of each subproblem. Although the model builds on previously published models, it is uniquely suited for blending with…
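On a multiplicative reading of such a model, a problem is solved only if every subproblem is, so predicted difficulty compounds across subproblems. This reading is our assumption for illustration; the published functional form may differ.

```python
from math import prod

def p_solve(subproblem_probs):
    """Chance of solving a problem that requires every subproblem to succeed
    (a multiplicative reading of the model, assumed here for illustration)."""
    return prod(subproblem_probs)

# Adding a subproblem, or lowering any single success probability,
# lowers the predicted chance of solving the whole problem.
print(p_solve([0.9, 0.8, 0.95]))        # 0.684
print(p_solve([0.9, 0.8, 0.95, 0.7]))   # 0.479
```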
A quantitative speciation model for the adsorption of organic pollutants on activated carbon.
Grivé, M; García, D; Domènech, C; Richard, L; Rojo, I; Martínez, X; Rovira, M
2013-01-01
Granular activated carbon (GAC) is commonly used as adsorbent in water treatment plants given its high capacity for retaining organic pollutants in aqueous phase. The current knowledge on GAC behaviour is essentially empirical, and no quantitative description of the chemical relationships between GAC surface groups and pollutants has been proposed. In this paper, we describe a quantitative model for the adsorption of atrazine onto the GAC surface. The model is based on results of potentiometric titrations and three types of adsorption experiments, which have been carried out in order to determine the nature and distribution of the functional groups on the GAC surface and to evaluate the adsorption characteristics of GAC towards atrazine. Potentiometric titrations have indicated the existence of at least two different families of chemical groups on the GAC surface, including phenolic- and benzoic-type surface groups. Adsorption experiments with atrazine have been satisfactorily modelled with the geochemical code PhreeqC, assuming that atrazine is sorbed onto the GAC surface in equilibrium (log Ks = 5.1 ± 0.5). Independent thermodynamic calculations suggest a possible adsorption of atrazine on a benzoic derivative. The present work opens a new approach for improving the adsorption capabilities of GAC towards organic pollutants by modifying its chemical properties.
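As a rough illustration of the equilibrium step, a one-site mass-action sorption calculation can be solved in closed form; only the reported log Ks is taken from the abstract, while the site and atrazine concentrations below are invented, and the published model additionally distinguishes phenolic- and benzoic-type sites in PhreeqC.

```python
import numpy as np

def sorbed_equilibrium(S_T, A_T, log_Ks=5.1):
    """One-site surface complexation S + A <-> SA with Ks = [SA]/([S][A]).

    Mass balance gives the quadratic x^2 - (S_T + A_T + 1/Ks) x + S_T*A_T = 0,
    whose physical root is the sorbed concentration x = [SA] (mol/L).
    """
    Ks = 10.0 ** log_Ks
    b = S_T + A_T + 1.0 / Ks
    return (b - np.sqrt(b * b - 4.0 * S_T * A_T)) / 2.0

# Invented totals: 1e-4 mol/L of GAC surface sites, 1e-6 mol/L atrazine.
x = sorbed_equilibrium(1e-4, 1e-6)
print(f"fraction of atrazine sorbed: {x / 1e-6:.3f}")
```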
Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments
ERIC Educational Resources Information Center
Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.
2009-01-01
The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…
Rheological properties of aging thermosensitive suspensions.
Purnomo, Eko H; van den Ende, Dirk; Mellema, Jorrit; Mugele, Frieder
2007-08-01
Aging observed in soft glassy materials inherently affects the rheological properties of these systems and has been described by the soft glassy rheology (SGR) model [S. M. Fielding, J. Rheol. 44, 323 (2000)]. In this paper, we report the measured linear rheological behavior of thermosensitive microgel suspensions and compare it quantitatively with the predictions of the SGR model. The dynamic moduli [G'(omega,t) and G''(omega,t)] obtained from oscillatory measurements are in good agreement with the model. The model also predicts quantitatively the creep compliance J(t - t(w),t(w)), obtained from step stress experiments, for the short time regime [(t - t(w)) < t(w)]. The relative effective temperature X/X(g) obtained from both the oscillatory and the step stress experiments is indeed less than 1 (X/X(g) < 1), in agreement with the definition of aging. Moreover, the elasticity of the compressed particles (G(p)) increases with increased compression, i.e., the degree of hindrance, and consequently the bulk elasticity (G' and 1/J) also increases with the degree of compression.
Quantitative structure-property relationship modeling of remote liposome loading of drugs.
Cern, Ahuva; Golbraikh, Alexander; Sedykh, Aleck; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram
2012-06-10
Remote loading of liposomes by trans-membrane gradients is used to achieve therapeutically efficacious intra-liposome concentrations of drugs. We have developed Quantitative Structure Property Relationship (QSPR) models of remote liposome loading for a data set including 60 drugs studied in 366 loading experiments internally or elsewhere. Both experimental conditions and computed chemical descriptors were employed as independent variables to predict the initial drug/lipid ratio (D/L) required to achieve high loading efficiency. Both binary (to distinguish high vs. low initial D/L) and continuous (to predict real D/L values) models were generated using advanced machine learning approaches and 5-fold external validation. The external prediction accuracy for binary models was as high as 91-96%; for continuous models the mean coefficient of determination R^2 for regression of predicted versus observed values was 0.76-0.79. We conclude that QSPR models can be used to identify candidate drugs expected to have high remote loading capacity while simultaneously optimizing the design of formulation experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
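A generic sketch of the dual modeling-and-validation scheme (binary high/low D/L classification plus continuous D/L regression, each scored by 5-fold cross-validation) is shown below; the random-forest learner and the synthetic descriptor matrix are placeholders, not the authors' machine-learning setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(366, 20))            # descriptors + conditions (placeholder)
dl = np.abs(rng.normal(size=366))         # initial D/L ratios (placeholder)
y_bin = (dl > np.median(dl)).astype(int)  # high vs. low initial D/L

accs, r2s = [], []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = RandomForestClassifier(200, random_state=0).fit(X[tr], y_bin[tr])
    reg = RandomForestRegressor(200, random_state=0).fit(X[tr], dl[tr])
    accs.append(accuracy_score(y_bin[te], clf.predict(X[te])))
    r2s.append(r2_score(dl[te], reg.predict(X[te])))

# With random placeholder data these scores are uninformative; real
# descriptors and measured D/L values would go in X and dl.
print("binary accuracy per fold:", np.round(accs, 2))
print("continuous R^2 per fold:", np.round(r2s, 2))
```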
A cascading failure model for analyzing railway accident causation
NASA Astrophysics Data System (ADS)
Liu, Jin-Tao; Li, Ke-Ping
2018-01-01
In this paper, a new cascading failure model is proposed for quantitatively analyzing the railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model can describe the cascading process of the railway accident more accurately than the previous models, and can quantitatively analyze the sensitivities and the influence of the causes. In conclusion, this model can assist us to reveal the latent rules of accident causation to reduce the occurrence of railway accidents.
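The load-redistribution rule can be sketched on a toy causal network: when a node fails, its load is shed to downstream nodes in proportion to the strength of the causal relationship, and any node pushed past the critical threshold fails in turn. The graph, weights, and threshold below are invented.

```python
def cascade(edges, loads, capacity, start):
    """Strength-weighted cascading failure on a causal network.

    edges: {cause: {effect: causal_strength}}; loads/capacity: per-node values.
    A failed node sheds its load to downstream nodes in proportion to the
    strength of the causal relationship; overloaded nodes fail in turn.
    """
    loads = dict(loads)
    failed, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        out = edges.get(node, {})
        total = sum(out.values())
        for succ, w in out.items():
            loads[succ] += loads[node] * w / total  # weighted redistribution
            if succ not in failed and loads[succ] > capacity[succ]:
                frontier.append(succ)
    return failed

# Toy accident chain (all names, weights, and thresholds invented):
edges = {"brake fault": {"overspeed": 0.7, "late alarm": 0.3},
         "overspeed": {"collision": 1.0},
         "late alarm": {"collision": 1.0},
         "collision": {}}
print(cascade(edges, {n: 0.5 for n in edges}, {n: 0.8 for n in edges},
              start="brake fault"))
```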
ERIC Educational Resources Information Center
Rodriguez-Barbero, A.; Lopez-Novoa, J. M.
2008-01-01
One of the problems that we have found when teaching human physiology in a Spanish medical school is that the degree of understanding by the students of the integration between organs and systems is rather poor. We attempted to remedy this problem by using a case discussion method together with the Quantitative Circulatory Physiology (QCP)…
Mode I Failure of Armor Ceramics: Experiments and Modeling
NASA Astrophysics Data System (ADS)
Meredith, Christopher; Leavy, Brian
2017-06-01
The pre-notched edge on impact (EOI) experiment is a technique for benchmarking the damage and fracture of ceramics subjected to projectile impact. A cylindrical projectile impacts the edge of a thin rectangular plate with a pre-notch on the opposite edge. Tension is generated at the notch tip, resulting in the initiation and propagation of a mode I crack back toward the impact edge. The crack can be quantitatively measured using an optical method called Digital Gradient Sensing, which measures the crack-tip deformation by simultaneously quantifying two orthogonal surface slopes via small deflections of light rays from a specularly reflective surface around the crack. The deflections in ceramics are small, so the high-speed camera needs to have a very high pixel count. This work reports on the results from pre-crack EOI experiments of SiC and B4C plates. The experimental data are quantitatively compared to impact simulations using an advanced continuum damage model: the Kayenta ceramic model in Alegra will be used to compare fracture propagation speeds, bifurcations, and inhomogeneous initiation of failure. This will provide insight into the driving mechanisms required for the macroscale failure modeling of ceramics.
Optimization of time-course experiments for kinetic model discrimination.
Lages, Nuno F; Cordeiro, Carlos; Sousa Silva, Marta; Ponces Freire, Ana; Ferreira, António E N
2012-01-01
Systems biology relies heavily on the construction of quantitative models of biochemical networks. These models must have predictive power to help unveil the underlying molecular mechanisms of cellular physiology, but it is also paramount that they are consistent with the data resulting from key experiments. Often, it is possible to find several models that describe the data equally well but provide significantly different quantitative predictions regarding particular variables of the network. In those cases, one is faced with a problem of model discrimination: the procedure of rejecting inappropriate models from a set of candidates in order to elect one as the best model to use for prediction. In this work, a method is proposed to optimize the design of enzyme kinetic assays with the goal of selecting a model among a set of candidates. We focus on models with systems of ordinary differential equations as the underlying mathematical description. The method provides a design where an extension of the Kullback-Leibler distance, computed over the time courses predicted by the models, is maximized. Given the asymmetric nature of this measure, a generalized differential evolution algorithm for multi-objective optimization problems was used. The kinetics of yeast glyoxalase I (EC 4.4.1.5) was chosen as a difficult test case to evaluate the method. Although a single-substrate kinetic model is usually considered, a two-substrate mechanism has also been proposed for this enzyme. We designed an experiment capable of discriminating between the two models by optimizing the initial substrate concentrations of glyoxalase I, in the presence of the subsequent pathway enzyme, glyoxalase II (EC 3.1.2.6). This discriminatory experiment was conducted in the laboratory and the results indicate a two-substrate mechanism for the kinetics of yeast glyoxalase I.
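A minimal sketch of the design idea follows: simulate the time courses predicted by two rival kinetic models, score their separation with a symmetrized Kullback-Leibler-type distance, and search the feasible initial concentrations for the most discriminating assay. The two rate laws and all parameters are invented stand-ins (not the glyoxalase mechanisms), and scipy's single-objective differential evolution replaces the generalized multi-objective algorithm used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def model_a(t, s):   # invented single-substrate Michaelis-Menten decay
    return [-1.0 * s[0] / (0.5 + s[0])]

def model_b(t, s):   # invented rival rate law (substrate-inhibition form)
    return [-1.2 * s[0] / (0.5 + s[0] + s[0] ** 2 / 2.0)]

def timecourse(f, s0, t):
    return np.clip(solve_ivp(f, (t[0], t[-1]), [s0], t_eval=t).y[0], 1e-12, None)

def neg_divergence(x):
    """Negative symmetrized KL-type distance between the two predicted
    time courses (normalized so they can be compared as distributions)."""
    t = np.linspace(0.0, 10.0, 200)
    p = timecourse(model_a, x[0], t); p /= p.sum()
    q = timecourse(model_b, x[0], t); q /= q.sum()
    return -(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Search the feasible initial-substrate range for the most discriminating assay.
best = differential_evolution(neg_divergence, bounds=[(0.05, 5.0)], seed=0)
print("most discriminating initial substrate concentration:", best.x[0])
```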
A Method for Label-Free, Differential Top-Down Proteomics.
Ntai, Ioanna; Toby, Timothy K; LeDuc, Richard D; Kelleher, Neil L
2016-01-01
Biomarker discovery in translational research has relied heavily on labeled and label-free quantitative bottom-up proteomics. Here, we describe a new approach to biomarker studies that utilizes high-throughput top-down proteomics and is the first to offer whole protein characterization and relative quantitation within the same experiment. Using yeast as a model, we report procedures for a label-free approach to quantify the relative abundance of intact proteins ranging from 0 to 30 kDa in two different states. In this chapter, we describe the integrated methodology for the large-scale profiling and quantitation of the intact proteome by liquid chromatography-mass spectrometry (LC-MS) without the need for metabolic or chemical labeling. This recent advance in quantitative top-down proteomics is best implemented with a robust and highly controlled sample preparation workflow before data acquisition on a high-resolution mass spectrometer, and the application of a hierarchical linear statistical model to account for the multiple levels of variance contained in quantitative proteomic comparisons of samples for basic and clinical research.
ERIC Educational Resources Information Center
Bonney, Lewis Alfred
This study is concerned with the manner in which experience with concrete, quantitative, interpersonal, and verbal content influences the development of ability patterns in first grade children. The literature related to theoretical models of intellectual development indicates that abilities develop in response to experiential variables, such as…
Experimental Control of Simple Pendulum Model
ERIC Educational Resources Information Center
Medina, C.
2004-01-01
This paper conveys information about a Physics laboratory experiment for students with some theoretical knowledge about oscillatory motion. Students construct a simple pendulum that behaves as an ideal one, and analyze the incidence of the model assumptions on its period. The following aspects are quantitatively analyzed: vanishing friction, small amplitude,…
A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based, algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
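The censoring idea at the core of the model can be illustrated with a Tobit-style likelihood for a single protein: observed peptide log-intensities contribute a Normal density, while each missing peak contributes the probability of falling below the detection limit. This toy version omits the peptide effects and the random-missingness component of the full model.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_lik(theta, y_obs, n_missing, limit):
    """Observed peptide log-intensities ~ N(mu, sigma); each missing peak
    contributes the censoring probability P(intensity < detection limit)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -(norm.logpdf(y_obs, mu, sigma).sum()
             + n_missing * norm.logcdf(limit, mu, sigma))

rng = np.random.default_rng(0)
truth = rng.normal(20.0, 1.0, size=12)        # one protein's peptide intensities
limit = 19.5                                  # assumed detection limit
y_obs = truth[truth >= limit]                 # only peaks above the limit are seen
fit = minimize(neg_log_lik, x0=[y_obs.mean(), 0.0],
               args=(y_obs, int((truth < limit).sum()), limit))
print("censoring-aware estimate:", fit.x[0], "vs naive mean:", y_obs.mean())
```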
A quantitative test of population genetics using spatiogenetic patterns in bacterial colonies.
Korolev, Kirill S; Xavier, João B; Nelson, David R; Foster, Kevin R
2011-10-01
It is widely accepted that population-genetics theory is the cornerstone of evolutionary analyses. Empirical tests of the theory, however, are challenging because of the complex relationships between space, dispersal, and evolution. Critically, we lack quantitative validation of the spatial models of population genetics. Here we combine analytics, on- and off-lattice simulations, and experiments with bacteria to perform quantitative tests of the theory. We study two bacterial species, the gut microbe Escherichia coli and the opportunistic pathogen Pseudomonas aeruginosa, and show that spatiogenetic patterns in colony biofilms of both species are accurately described by an extension of the one-dimensional stepping-stone model. We use one empirical measure, genetic diversity at the colony periphery, to parameterize our models and show that we can then accurately predict another key variable: the degree of short-range cell migration along an edge. Moreover, the model allows us to estimate other key parameters, including effective population size (density) at the expansion frontier. While our experimental system is a simplification of natural microbial communities, we argue that it constitutes proof of principle that the spatial models of population genetics can quantitatively capture organismal evolution.
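A minimal one-dimensional stepping-stone simulation conveys the quantity matched to the colony data: demes along the expansion front resample alleles from themselves or a neighbor, and local heterozygosity decays as sectors coarsen. Deme count, migration rate, and generation count are arbitrary choices.

```python
import numpy as np

def stepping_stone(n_demes=500, generations=2000, m=0.25, seed=0):
    """1D stepping-stone front: each deme resamples its allele from itself
    or a neighbor; returns local heterozygosity H(t) as sectors coarsen."""
    rng = np.random.default_rng(seed)
    alleles = rng.integers(0, 2, n_demes)     # two neutral alleles
    idx = np.arange(n_demes)
    H = []
    for _ in range(generations):
        shift = rng.choice([-1, 0, 1], n_demes, p=[m / 2, 1 - m, m / 2])
        alleles = alleles[(idx + shift) % n_demes]
        H.append(np.mean(alleles != np.roll(alleles, 1)))
    return np.array(H)

H = stepping_stone()
print("heterozygosity at t = 0, 500, 1000, 1500:", np.round(H[::500], 3))
```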
ERIC Educational Resources Information Center
Anderson, James L.; And Others
1980-01-01
Presents an undergraduate quantitative analysis experiment, describing an atomic absorption quantitation scheme that is fast, sensitive and comparatively simple relative to other titration experiments. (CS)
NASA Astrophysics Data System (ADS)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-08-01
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
A three-dimensional finite element model of near-field scanning microwave microscopy
NASA Astrophysics Data System (ADS)
Balusek, Curtis; Friedman, Barry; Luna, Darwin; Oetiker, Brian; Babajanyan, Arsen; Lee, Kiejin
2012-10-01
A three-dimensional finite element model of an experimental near-field scanning microwave microscope (NSMM) has been developed and compared to experiment on non-conducting samples. The microwave reflection coefficient S11 is calculated as a function of frequency with no adjustable parameters. There is qualitative agreement with experiment in that the resonant frequency can show a sizable increase with sample dielectric constant, a result that is not obtained with a two-dimensional model. The most realistic model shows semi-quantitative agreement with experiment. The effect of different sample thicknesses and varying tip-sample distances is investigated numerically and shown to affect NSMM performance in a way consistent with experiment. Visualization of the electric field indicates that the field is primarily determined by the shape of the coupling hooks.
Magnetically launched flyer plate technique for probing electrical conductivity of compressed copper
Cochrane, Kyle R.; Lemke, Raymond W.; Riford, Z.; ...
2016-03-11
The electrical conductivity of materials under extremes of temperature and pressure is of crucial importance for a wide variety of phenomena, including planetary modeling, inertial confinement fusion, and pulsed power based dynamic materials experiments. There is a dearth of experimental techniques and data for highly compressed materials, even at known states such as along the principal isentrope and Hugoniot, where many pulsed power experiments occur. We present a method for developing, calibrating, and validating material conductivity models as used in magnetohydrodynamic (MHD) simulations. The difficulty in calibrating a conductivity model is in knowing where the model should be modified. Our method isolates those regions that will have an impact. It also quantitatively prioritizes which regions will have the most beneficial impact. Finally, it tracks the quantitative improvements to the conductivity model during each incremental adjustment. In this study, we use an experiment on the Sandia National Laboratories Z-machine to isentropically launch multiple flyer plates and, with the MHD code ALEGRA and the optimization code DAKOTA, calibrate the conductivity such that we match an experimental figure of merit to ±1%.
Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).
Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel
2018-02-07
The transport efficiency during capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni(2+), Zn(2+), Cu(2+), PO4(3-), bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments have revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing step), as well as the introduction of sample liquid wicking areas allow analyte transport efficiency to be increased. It is also shown that the interaction of analytes with the negatively charged cellulosic paper substrate surface is strongly influenced by the physico-chemical properties of the model analyte and can in some cases (Cu(2+)) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.
The Quantitative Preparation of Future Geoscience Graduate Students
NASA Astrophysics Data System (ADS)
Manduca, C. A.; Hancock, G. S.
2006-12-01
Modern geoscience is a highly quantitative science. In February, a small group of faculty and graduate students from across the country met to discuss the quantitative preparation of geoscience majors for graduate school. The group included ten faculty supervising graduate students in quantitative areas spanning the earth, atmosphere, and ocean sciences; five current graduate students in these areas; and five faculty teaching undergraduate students in the spectrum of institutions preparing students for graduate work. Discussion focused on four key areas: Are incoming graduate students adequately prepared for the quantitative aspects of graduate geoscience programs? What are the essential quantitative skills required for success in graduate school? What are perceived as the important courses to prepare students for the quantitative aspects of graduate school? What programs/resources would be valuable in helping faculty/departments improve the quantitative preparation of students? The participants concluded that strengthening the quantitative preparation of undergraduate geoscience majors would increase their opportunities in graduate school. While specifics differed amongst disciplines, special importance was placed on developing the ability to use quantitative skills to solve geoscience problems. This requires the ability to pose problems so they can be addressed quantitatively, understand the relationship between quantitative concepts and physical representations, visualize mathematics, test the reasonableness of quantitative results, creatively move forward from existing models/techniques/approaches, and move between quantitative and verbal descriptions. A list of important quantitative competencies desirable in incoming graduate students includes mechanical skills in basic mathematics, functions, multi-variate analysis, statistics and calculus, as well as skills in logical analysis and the ability to learn independently in quantitative ways. Calculus, calculus-based physics, chemistry, statistics, programming and linear algebra were viewed as important course preparation for a successful graduate experience. A set of recommendations for departments and for new community resources includes ideas for infusing quantitative reasoning throughout the undergraduate experience and mechanisms for learning from successful experiments in both geoscience and mathematics. A full list of participants, summaries of the meeting discussion and recommendations are available at http://serc.carleton.edu/quantskills/winter06/index.html. These documents, crafted by a small but diverse group, can serve as a starting point for broader community discussion of the quantitative preparation of future geoscience graduate students.
A Statistical Decision Model for Periodical Selection for a Specialized Information Center
ERIC Educational Resources Information Center
Dym, Eleanor D.; Shirey, Donald L.
1973-01-01
An experiment is described which attempts to define a quantitative methodology for the identification and evaluation of all possibly relevant periodical titles containing toxicological-biological information. A statistical decision model was designed and employed, along with yes/no criteria questions, a training technique and a quality control…
e-Learning Success Model: An Information Systems Perspective
ERIC Educational Resources Information Center
Lee-Post, Anita
2009-01-01
This paper reports the observations made and experience gained from developing and delivering an online quantitative methods course for Business undergraduates. Inspired by issues and challenges experienced in developing the online course, a model is advanced to address the question of how to guide the design, development, and delivery of…
Inherently unstable networks collapse to a critical point
NASA Astrophysics Data System (ADS)
Sheinman, M.; Sharma, A.; Alvarado, J.; Koenderink, G. H.; MacKintosh, F. C.
2015-07-01
Nonequilibrium systems that are driven or drive themselves towards a critical point have been studied for almost three decades. Here we present a minimalist example of such a system, motivated by experiments on collapsing active elastic networks. Our model of an unstable elastic network exhibits a collapse towards a critical point from any macroscopically connected initial configuration. Taking into account steric interactions within the network, the model qualitatively and quantitatively reproduces results of the experiments on collapsing active gels.
Transient deformation of a droplet near a microfluidic constriction: A quantitative analysis
NASA Astrophysics Data System (ADS)
Trégouët, Corentin; Salez, Thomas; Monteux, Cécile; Reyssat, Mathilde
2018-05-01
We report on experiments that consist of deforming a collection of monodisperse droplets produced by a microfluidic chip through a flow-focusing device. We show that a proper numerical modeling of the flow is necessary to access the stress applied by the latter on the droplet along its trajectory through the chip. This crucial step enables the full integration of the differential equation governing the dynamical deformation, and consequently the robust measurement of the interfacial tension by fitting the experiments with the calculated deformation. Our study thus demonstrates the feasibility of quantitative in situ rheology in microfluidic flows involving, e.g., droplets, capsules, or cells.
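The fitting procedure can be sketched as follows: integrate a first-order relaxation equation for the droplet deformation driven by the (numerically modeled) stress along the trajectory, then adjust the interfacial tension until the computed deformation matches the measured one. The Gaussian stress pulse, viscosity, radius, and relaxation law below are illustrative assumptions, not the authors' flow model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

mu, a = 1.0, 50e-6                    # assumed outer viscosity (Pa s) and radius (m)
t_obs = np.linspace(0, 0.02, 60)      # s
stress = 5.0 * np.exp(-((t_obs - 0.01) / 0.003) ** 2)   # Pa, stand-in for CFD stress

def model_D(gamma):
    """Integrate dD/dt = (a*sigma(t)/gamma - D)/tau with tau = mu*a/gamma."""
    tau = mu * a / gamma
    f = lambda t, D: [(np.interp(t, t_obs, stress) * a / gamma - D[0]) / tau]
    return solve_ivp(f, (t_obs[0], t_obs[-1]), [0.0], t_eval=t_obs).y[0]

# Synthetic "measured" deformation, then recover gamma by least squares.
D_meas = model_D(5e-3) + 1e-4 * np.random.default_rng(1).normal(size=t_obs.size)
fit = least_squares(lambda g: model_D(g[0]) - D_meas, x0=[1e-2],
                    bounds=(1e-4, 1e-1))
print("fitted interfacial tension ~", fit.x[0], "N/m (true value 5e-3)")
```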
A computational model of selection by consequences.
McDowell, J J
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512
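The hyperbolic relation referred to is Herrnstein's hyperbola, B = k r / (r + r_e); fitting its two parameters to rate data is a small regression, sketched here on synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(r, k, r_e):
    """Herrnstein's hyperbola: response rate vs. reinforcement rate."""
    return k * r / (r + r_e)

r = np.array([8.0, 20.0, 50.0, 120.0, 300.0])   # reinforcers per hour
B = hyperbola(r, 95.0, 40.0) * (1 + 0.03 * np.random.default_rng(2).normal(size=r.size))
(k_hat, re_hat), _ = curve_fit(hyperbola, r, B, p0=[80.0, 30.0])
print(f"k = {k_hat:.1f}, r_e = {re_hat:.1f}")
```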
Behavioral momentum and resurgence: Effects of time in extinction and repeated resurgence tests
Shahan, Timothy A.
2014-01-01
Resurgence is an increase in a previously extinguished operant response that occurs if an alternative reinforcement introduced during extinction is removed. Shahan and Sweeney (2011) developed a quantitative model of resurgence based on behavioral momentum theory that captures existing data well and predicts that resurgence should decrease as time in extinction and exposure to the alternative reinforcement increases. Two experiments tested this prediction. The data from Experiment 1 suggested that without a return to baseline, resurgence decreases with increased exposure to alternative reinforcement and to extinction of the target response. Experiment 2 tested the predictions of the model across two conditions, one with constant alternative reinforcement for five sessions, and the other with alternative reinforcement removed three times. In both conditions, the alternative reinforcement was removed for the final test session. Experiment 2 again demonstrated a decrease in relapse across repeated resurgence tests. Furthermore, comparably little resurgence was observed at the same time point in extinction in the final test, despite dissimilar previous exposures to alternative reinforcement removal. The quantitative model provided a good description of the observed data in both experiments. More broadly, these data suggest that increased exposure to extinction may be a successful strategy to reduce resurgence. The relationship between these data and existing tests of the effect of time in extinction on resurgence is discussed. PMID:23982985
Getting quantitative about consequences of cross-ecosystem resource subsidies on recipient consumers
Richardson, John S.; Wipfli, Mark S.
2016-01-01
Most studies of cross-ecosystem resource subsidies have demonstrated positive effects on recipient consumer populations, often with very large effect sizes. However, it is important to move beyond these initial addition–exclusion experiments to consider the quantitative consequences for populations across gradients in the rates and quality of resource inputs. In our introduction to this special issue, we describe at least four potential models that describe functional relationships between subsidy input rates and consumer responses, most of them asymptotic. Here we aim to advance our quantitative understanding of how subsidy inputs influence recipient consumers and their communities. In the papers following, fish were either the recipient consumers or the subsidy as carcasses of anadromous species. Advancing general, predictive models will enable us to further consider what other factors are potentially co-limiting (e.g., nutrients, other population interactions, physical habitat, etc.) and better integrate resource subsidies into consumer–resource, biophysical dynamics models.
Barker, Estelle; McCracken, Lance M
2014-08-01
Health care organizations, both large and small, frequently undergo processes of change. In fact, if health care organizations are to improve over time, they must change; this includes pain services. The purpose of the present study was to examine a process of change in treatment model within a specialty interdisciplinary pain service in the UK. This change entailed a switch from traditional cognitive-behavioural therapy to a form of cognitive-behavioural therapy called acceptance and commitment therapy. An anonymous online survey, including qualitative and quantitative components, was carried out approximately 15 months after the initial introduction of the new treatment model and methods. Fourteen out of 16 current clinical staff responded to the survey. Three themes emerged in qualitative analyses: positive engagement in change; uncertainty and discomfort; and group cohesion versus discord. Quantitative results from closed questions showed a pattern of uncertainty about the superiority of one model over the other, combined with more positive views on progress reflected, and the experience of personal benefits, from adopting the new model. The psychological flexibility model, the model behind acceptance and commitment therapy, may clarify both processes in patient behaviour and processes of staff experience and skilful treatment delivery. This integration of processes on both sides of treatment delivery may be a strength of acceptance and commitment therapy.
NASA Astrophysics Data System (ADS)
Ivanova, Bojidarka; Spiteller, Michael
2017-12-01
The present paper deals with the quantitative kinetics and thermodynamics of collision-induced dissociation (CID) reactions of piperazines under different experimental conditions, together with a systematic description of the effect of counter-ions on common MS fragmentation reactions of piperazines and the intra-molecular effect of quaternary cyclization of substituted piperazines yielding quaternary salts. Quantitative model equations are discussed for the rate constants and free Gibbs energies of a series of m-independent CID fragmentation processes in the gas phase, which have been evidenced experimentally. Both kinetic and thermodynamic parameters are also predicted by computational density functional theory (DFT) and ab initio static and dynamic methods. The paper examines the validity of the Maxwell-Boltzmann distribution for non-Boltzmann CID processes quantitatively as well. The experiments conducted within the latter framework show excellent correspondence with theoretical quantum chemical modeling. An important property of the presented model equations of reaction kinetics is their applicability to predicting unknown and assigning known mass spectrometric (MS) patterns. The nature of the "GP" continuum of the CID-MS coupled scheme of measurements with an electrospray ionization (ESI) source is discussed, performing parallel computations in the gas phase (GP) and in a polar continuum at different temperatures and ionic strengths. The effect of pressure is presented. The study contributes significantly to methodological and phenomenological developments of CID-MS and its analytical implementations for quantitative and structural analyses. It also demonstrates the great promise of complementary application of experimental CID-MS and computational quantum chemistry in studying chemical reactivity, among others. To a considerable extent, this work underlines the place of computational quantum chemistry in the field of experimental analytical chemistry, in particular highlighting structural analysis.
Ernst, Marielle; Kriston, Levente; Romero, Javier M; Frölich, Andreas M; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik
2016-01-01
We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of key competencies of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for the evaluation of the current performance status of participants as well as the evolution of their technical competency over time.
NASA Astrophysics Data System (ADS)
Morgenthaler, George W.; Nuñez, German R.; Botello, Aaron M.; Soto, Jose; Shrairman, Ruth; Landau, Alexander
1998-01-01
Many reaction time experiments have been conducted over the years to observe human responses. However, most of the experiments that were performed did not have quantitatively accurate instruments for measuring change in reaction time under stress. There is a great need for quantitative instruments to measure neuromuscular reaction responses under stressful conditions such as distraction, disorientation, disease, alcohol, drugs, etc. The two instruments used in the experiments reported in this paper are such devices. Their accuracy, portability, ease of use, and biometric character are what make them special. PACE™ is a software model used to measure reaction time. VeriFax's Impairoscope measures the deterioration of neuromuscular responses. During the 1997 Summer Semester, various reaction time experiments were conducted on University of Colorado faculty, staff, and students using the PACE™ system. The tests included both two-eye and one-eye unstressed trials and trials with various stresses such as fatigue, distractions in which subjects were asked to perform simple arithmetic during the PACE™ tests, and stress due to rotating-chair dizziness. Various VeriFax Impairoscope tests, both stressed and unstressed, were conducted to determine the Impairoscope's ability to quantitatively measure impairment. In the 1997 Fall Semester, a Phase II effort was undertaken to increase test sample sizes in order to provide statistical precision and stability. More sophisticated statistical methods remain to be applied to better interpret the data.
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower length scale atomistic simulations instead of engineering scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
Kinetics of Cd(ii) adsorption and desorption on ferrihydrite: experiments and modeling.
Liang, Yuzhen; Tian, Lei; Lu, Yang; Peng, Lanfang; Wang, Pei; Lin, Jingyi; Cheng, Tao; Dang, Zhi; Shi, Zhenqing
2018-05-15
The kinetics of Cd(ii) adsorption/desorption on ferrihydrite is an important process affecting the fate, transport, and bioavailability of Cd(ii) in the environment, yet it has rarely been systematically studied and understood at a quantitative level. In this work, a combination of stirred-flow kinetic experiments, batch adsorption equilibrium experiments, high-resolution transmission electron microscopy (HR-TEM), and mechanistic kinetic modeling was used to study the kinetic behavior of Cd(ii) adsorption/desorption on ferrihydrite. HR-TEM images showed the open, loose, and sponge-like structure of ferrihydrite. The batch adsorption equilibrium experiments revealed that higher pH and initial metal concentration increased Cd(ii) adsorption on ferrihydrite. The stirred-flow kinetic results demonstrated the increased adsorption rate and capacity as a result of the increased pH, influent concentration, and ferrihydrite concentration. The mechanistic kinetic model successfully described the kinetic behavior of Cd(ii) during the adsorption and desorption stages under various chemistry conditions. The model calculations showed that the adsorption rate coefficients varied as a function of solution chemistry, and the relative contributions of the weak and strong ferrihydrite sites to Cd(ii) binding varied with time at different pH and initial metal concentrations. Our model is able to quantitatively assess the contributions of each individual ferrihydrite binding site to the overall Cd(ii) adsorption/desorption kinetics. This study provides insights into the dynamic behavior of Cd(ii) and a predictive modeling tool for Cd(ii) adsorption/desorption kinetics when ferrihydrite is present, which may be helpful for the risk assessment and management of Cd-contaminated sites.
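A batch version of the two-site kinetic scheme described (weak and strong ferrihydrite sites with site-specific adsorption and desorption rate coefficients) can be written as a small ODE system; all rate constants, capacities, and concentrations below are invented, and the published model additionally couples the rates to solution chemistry.

```python
import numpy as np
from scipy.integrate import solve_ivp

kf = np.array([0.5, 2.0])    # adsorption rate coefficients, weak/strong (invented)
kr = np.array([0.2, 0.01])   # desorption rate coefficients (invented)
S = np.array([0.05, 0.01])   # site capacities, mmol/g (invented)
solids = 1.0                 # ferrihydrite concentration, g/L

def rhs(t, y):
    """dq_i/dt = kf_i*C*(S_i - q_i) - kr_i*q_i; dissolved Cd balances uptake."""
    C, q = y[0], np.array(y[1:])
    r = kf * C * (S - q) - kr * q
    return [-solids * r.sum(), r[0], r[1]]

sol = solve_ivp(rhs, (0, 48), [0.02, 0.0, 0.0], t_eval=np.linspace(0, 48, 7))
print("dissolved Cd(ii) (mmol/L):", np.round(sol.y[0], 4))
```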
ERIC Educational Resources Information Center
Duffy, Debra Lynne Foster
2012-01-01
Through a non-experimental descriptive and comparative mixed-methods approach, this study investigated the experiences of sixth grade earth science students with groundwater physical models through an extended 5E learning cycle format. The data collection was based on a series of quantitative and qualitative research tools intended to investigate…
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement for the rigorous Range-Doppler (RD) model in the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as the two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracy of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracy of geocoded SAR images can be improved significantly, particularly in the Easting direction. In another experiment the computational efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be improved by a factor of at least 16. In addition, the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice is to use DEM datasets of spatial resolution comparable to that of the SAR images.
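For reference, the forward RPC mapping evaluates each image coordinate as a ratio of cubic polynomials in normalized ground coordinates; the sketch below assumes the common RPC00B monomial ordering (worth checking against a given product's metadata), and the refinement described in the article can be viewed as correcting the range/azimuth-derived offsets before this evaluation.

```python
import numpy as np

def rpc_terms(P, L, H):
    """The 20 cubic monomials of the RPC model (common RPC00B ordering)."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L**2, P**2, H**2,
                     P*L*H, L**3, L*P**2, L*H**2, L**2*P, P**3, P*H**2,
                     L**2*H, P**2*H, H**3])

def rpc_image_coord(num, den, lat, lon, h, off, scale):
    """One image coordinate as a ratio of cubic polynomials in normalized
    ground coordinates, using metadata-supplied offsets and scales."""
    P = (lat - off['lat']) / scale['lat']
    L = (lon - off['lon']) / scale['lon']
    H = (h - off['h']) / scale['h']
    t = rpc_terms(P, L, H)
    return (num @ t) / (den @ t)

# Placeholder coefficients: numerator ~ normalized latitude, denominator = 1.
num = np.zeros(20); num[2] = 1.0
den = np.zeros(20); den[0] = 1.0
off = {'lat': 30.0, 'lon': 114.0, 'h': 50.0}
scale = {'lat': 0.5, 'lon': 0.5, 'h': 500.0}
print(rpc_image_coord(num, den, 30.25, 114.1, 100.0, off, scale))
```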
Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.
Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A
2017-02-01
We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed ∼ 200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure.
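For two exchanging longitudinal pools, the generalized tissue model reduces to a linear Bloch-McConnell system that can be propagated exactly with a matrix exponential, as sketched below; the pool sizes, relaxation rates, and exchange rate are assumed values, and MRiLab's engine evolves full magnetization vectors for many pools on GPU.

```python
import numpy as np
from scipy.linalg import expm

R1 = np.array([1.0, 2.0])    # assumed pool relaxation rates 1/T1 (1/s)
M0 = np.array([0.9, 0.1])    # assumed equilibrium pool sizes (free, bound)
k_ab = 3.0                   # exchange rate a->b (1/s); detailed balance below
k_ba = k_ab * M0[0] / M0[1]

A = np.array([[-(R1[0] + k_ab), k_ba],
              [k_ab, -(R1[1] + k_ba)]])
b = R1 * M0                  # dM/dt = A M + b

def evolve(M, dt):
    """Exact propagation: M(t+dt) = e^{A dt} (M - Mss) + Mss, Mss = -A^{-1} b."""
    Mss = -np.linalg.solve(A, b)
    return expm(A * dt) @ (M - Mss) + Mss

M = np.zeros(2)              # longitudinal magnetization after full saturation
for _ in range(10):          # recovery sampled every 50 ms
    M = evolve(M, 0.05)
print("pool magnetizations after 0.5 s recovery:", M)
```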
Punishment in human choice: direct or competitive suppression?
Critchfield, Thomas S; Paletz, Elliott M; MacAleese, Kenneth R; Newland, M Christopher
2003-01-01
This investigation compared the predictions of two models describing the integration of reinforcement and punishment effects in operant choice. Deluty's (1976) competitive-suppression model (conceptually related to two-factor punishment theories) and de Villiers' (1980) direct-suppression model (conceptually related to one-factor punishment theories) have been tested previously in nonhumans but not at the individual level in humans. Mouse clicking by college students was maintained in a two-alternative concurrent schedule of variable-interval money reinforcement. Punishment consisted of variable-interval money losses. Experiment 1 verified that money loss was an effective punisher in this context. Experiment 2 consisted of qualitative model comparisons similar to those used in previous studies involving nonhumans. Following a no-punishment baseline, punishment was superimposed upon both response alternatives. Under schedule values for which the direct-suppression model, but not the competitive-suppression model, predicted distinct shifts from baseline performance, or vice versa, 12 of 14 individual-subject functions, generated by 7 subjects, supported the direct-suppression model. When the punishment models were converted to the form of the generalized matching law, least-squares linear regression fits for a direct-suppression model were superior to those of a competitive-suppression model for 6 of 7 subjects. In Experiment 3, a more thorough quantitative test of the modified models, fits for a direct-suppression model were superior in 11 of 13 cases. These results correspond well to those of investigations conducted with nonhumans and provide the first individual-subject evidence that a direct-suppression model, evaluated both qualitatively and quantitatively, describes human punishment better than a competitive-suppression model. We discuss implications for developing better punishment models and future investigations of punishment in human choice. PMID:13677606
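The two candidate integration rules can be written side by side; these algebraic forms follow the usual renderings of de Villiers' direct-suppression and Deluty's competitive-suppression models (worth checking against the originals), wrapped in the generalized-matching form used for the quantitative tests.

```python
import numpy as np

def direct_suppression(r1, r2, p1, p2):
    """de Villiers (1980): punishment subtracts from its own alternative."""
    return (r1 - p1) / (r2 - p2)

def competitive_suppression(r1, r2, p1, p2):
    """Deluty (1976): punishment on one alternative adds to the other's side."""
    return (r1 + p2) / (r2 + p1)

def generalized_matching(ratio, a, log_b):
    """log(B1/B2) = a*log(ratio) + log b, the regression form used for fitting."""
    return a * np.log10(ratio) + log_b

# Equal punishment superimposed on unequal reinforcement (rates per hour):
r1, r2, p1, p2 = 60.0, 20.0, 10.0, 10.0
print("direct:", direct_suppression(r1, r2, p1, p2),
      "competitive:", competitive_suppression(r1, r2, p1, p2))
```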
Study On The Application Of CBERS-02B To Quantitative Soil Erosion Monitoring
NASA Astrophysics Data System (ADS)
Shi, Mingchang; Xu, Jing; Wang, Lei; Wang, Xiaoyun; Mu, Jing
2010-10-01
Currently, the reduction of soil erosion is an important prerequisite for achieving ecological security. Since real-time and quantitative evaluation of regional soil erosion plays a significant role in reducing soil erosion, soil erosion models are more and more widely used. Based on the RUSLE model, this paper carries out quantitative soil erosion monitoring in the Xi River Basin and its surrounding areas by using CBERS-02B CCD, DEM, TRMM and other data. It also validates the monitoring results against remote sensing investigation results from 2005. The monitoring results show that in 2009, the total amount of soil erosion in the study area was 1.94×10^6 t, the erosion area was 2055.2 km^2 (54.06% of the total area), and the average soil erosion modulus was 509.7 t km^-2 a^-1. As a case study using CBERS-02B data for quantitative soil erosion monitoring, this study provides experience on the application of CBERS-02B data in the field of quantitative soil erosion monitoring and for local soil erosion management.
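The underlying computation is the RUSLE product A = R · K · LS · C · P evaluated per pixel from rasterized inputs (TRMM-derived rainfall erosivity, soil erodibility, DEM-derived slope length/steepness, and image-derived cover and practice factors); the grids below are placeholders, and the factor units determine the units of A.

```python
import numpy as np

shape = (100, 100)
R = np.full(shape, 2400.0)   # rainfall erosivity (e.g. TRMM-derived), placeholder
K = np.full(shape, 0.03)     # soil erodibility, placeholder
LS = np.random.default_rng(0).uniform(0.5, 8.0, shape)  # DEM-derived slope factor
C = np.full(shape, 0.2)      # cover-management factor (e.g. from CBERS-02B CCD)
P = np.full(shape, 0.8)      # support practice factor

A = R * K * LS * C * P       # per-pixel soil loss; units follow the factor units
print("mean erosion modulus:", A.mean())
```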
Oxidative DNA damage background estimated by a system model of base excision repair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, B A; Wilson, III, D M
Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
NASA Astrophysics Data System (ADS)
Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng
2018-04-01
The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on an ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to depend on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provides a physical explanation for the relationship between ML intensity and the stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation and could provide a useful reference for quantitative stress measurement using ML sensors in general.
NASA Technical Reports Server (NTRS)
Cotton, W. R.; Tripoli, G. J.
1982-01-01
Observational requirements for predicting convective storm development and intensity, as suggested by recent numerical experiments, are examined. Recent 3D numerical experiments are interpreted with regard to the relationship between overshooting tops and surface wind gusts. The development of software for emulating satellite-inferred cloud properties using 3D cloud model predicted data and the simulation of Heymsfield's (1981) Northern Illinois storm are described, as well as the development of a conceptual/semi-quantitative model of eastward propagating mesoscale convective complexes forming to the lee of the Rocky Mountains.
A long term model of circulation. [human body
NASA Technical Reports Server (NTRS)
White, R. J.
1974-01-01
A quantitative approach to modeling human physiological function, with a view toward ultimate application to long duration space flight experiments, was undertaken. Data was obtained on the effect of weightlessness on certain aspects of human physiological function during 1-3 month periods. Modifications in the Guyton model are reviewed. Design considerations for bilateral interface models are discussed. Construction of a functioning whole body model was studied, as well as the testing of the model versus available data.
Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases
Zhang, Hongpo
2018-01-01
Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions based on shallow neural networks show high variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively. The variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369
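One reading of the reconstruction-error depth rule is sketched below: greedily stack RBMs and stop adding layers once the layer-wise reconstruction error stops improving. The hidden size, tolerance, and the use of scikit-learn's BernoulliRBM (inputs scaled to [0, 1]) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruction_error(rbm, V):
    """Mean squared error of a one-step reconstruction through the RBM."""
    Hp = sigmoid(V @ rbm.components_.T + rbm.intercept_hidden_)
    Vr = sigmoid(Hp @ rbm.components_ + rbm.intercept_visible_)
    return np.mean((V - Vr) ** 2)

def grow_dbn(X, max_layers=5, n_hidden=32, tol=1e-3):
    """Greedily stack RBMs; stop when reconstruction error stops improving
    (a heuristic reading of the depth-selection rule)."""
    layers, data, prev = [], X, np.inf
    for _ in range(max_layers):
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                           n_iter=20, random_state=0).fit(data)
        err = reconstruction_error(rbm, data)
        if prev - err < tol:
            break
        layers.append(rbm)
        prev, data = err, rbm.transform(data)
    return layers

layers = grow_dbn(np.random.default_rng(1).random((200, 30)))
print("selected depth:", len(layers))
```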
Photosynthetic Control of Atmospheric Carbonyl Sulfide during the Growing Season
NASA Technical Reports Server (NTRS)
Campbell, J. Elliott; Carmichael, Gregory R.; Chai, T.; Mena-Carrasco, M.; Tang, Y.; Blake, D. R.; Blake, N. J.; Vay, Stephanie A.; Collatz, G. James; Baker, I.;
2008-01-01
Climate models incorporate photosynthesis-climate feedbacks, yet we lack robust tools for large-scale assessments of these processes. Recent work suggests that carbonyl sulfide (COS), a trace gas consumed by plants, could provide a valuable constraint on photosynthesis. Here we analyze airborne observations of COS and carbon dioxide concentrations during the growing season over North America with a three-dimensional atmospheric transport model. We successfully modeled the persistent vertical drawdown of atmospheric COS using the quantitative relation between COS and photosynthesis that has been measured in plant chamber experiments. Furthermore, this drawdown is driven by plant uptake rather than other continental and oceanic fluxes in the model. These results provide quantitative evidence that COS gradients in the continental growing season may have broad use as a measurement-based photosynthesis tracer.
Temporal maps and informativeness in associative learning.
Balsam, Peter D; Gallistel, C Randy
2009-02-01
Neurobiological research on learning assumes that temporal contiguity is essential for association formation, but what constitutes temporal contiguity has never been specified. We review evidence that learning depends, instead, on learning a temporal map. Temporal relations between events are encoded even from single experiences. The speed with which an anticipatory response emerges is proportional to the informativeness of the encoded relation between a predictive stimulus or event and the event it predicts. This principle yields a quantitative account of the heretofore undefined, but theoretically crucial, concept of temporal pairing, an account in quantitative accord with surprising experimental findings. The same principle explains the basic results in the cue competition literature, which motivated the Rescorla-Wagner model and most other contemporary models of associative learning. The essential feature of a memory mechanism in this account is its ability to encode quantitative information.
Mosley, Garrett L; Nguyen, Phuong; Wu, Benjamin M; Kamei, Daniel T
2016-08-07
The lateral-flow immunoassay (LFA) is a well-established diagnostic technology that has recently seen significant advancements due in part to the rapidly expanding fields of paper diagnostics and paper-fluidics. As LFA-based diagnostics become more complex, it becomes increasingly important to quantitatively determine key parameters during the design and evaluation process. However, current experimental methods for determining these parameters have certain limitations when applied to LFA systems. In this work, we describe our novel methods of combining paper and radioactive measurements to determine nanoprobe molarity, the number of antibodies per nanoprobe, and the forward and reverse rate constants for nanoprobe binding to immobilized target on the LFA test line. Using a model LFA system that detects the presence of the protein transferrin (Tf), we demonstrate the application of our methods, which involve quantitative experimentation and mathematical modeling. We also compare the results of our rate constant experiments with traditional experiments to demonstrate how our methods more appropriately capture the influence of the LFA environment on the binding interaction. Our novel experimental approaches can therefore more efficiently guide the research process for LFA design, leading to more rapid advancement of the field of paper-based diagnostics.
A quantitative experiment on the fountain effect in superfluid helium
NASA Astrophysics Data System (ADS)
Amigó, M. L.; Herrera, T.; Neñer, L.; Peralta Gavensky, L.; Turco, F.; Luzuriaga, J.
2017-09-01
Superfluid helium, a state of matter existing at low temperatures, shows many remarkable properties. One example is the so-called fountain effect, where a heater can produce a jet of helium. This converts heat into mechanical motion: a machine with no moving parts, but working only below 2 K. Allen and Jones first demonstrated the effect in 1938, but their work was basically qualitative. We now present data from a quantitative version of the experiment. We have measured the heat supplied, the temperature, and the height of the jet produced. We also develop equations, based on the two-fluid model of superfluid helium, that give a satisfactory fit to the data. The experiment has been performed by advanced undergraduate students in our home institution, and illustrates in a vivid way some of the striking properties of the superfluid state.
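For orientation, the quantitative analysis rests on the two-fluid thermomechanical relation; a sketch, assuming small temperature differences (the paper's full treatment also has to handle the hydrodynamics of the jet):

```latex
% London's thermomechanical (fountain-pressure) relation for He II,
% assuming small temperature differences; s is the specific entropy and
% rho the total density.
\[
  \frac{dP}{dT} = \rho s
  \quad\Longrightarrow\quad
  \Delta P \approx \rho s \, \Delta T ,
\]
so a jet rising to height $h$ against gravity satisfies
$\rho g h \approx \rho s \, \Delta T$, i.e. $h \approx s \, \Delta T / g$.
```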
Agent-based re-engineering of ErbB signaling: a modeling pipeline for integrative systems biology.
Das, Arya A; Ajayakumar Darsana, T; Jacob, Elizabeth
2017-03-01
Experiments in systems biology are generally supported by a computational model which quantitatively estimates the parameters of the system by finding the best fit to the experiment. Mathematical models have proved to be successful in reverse engineering the system. The data generated are interpreted to understand the dynamics of the underlying phenomena. The question we have sought to answer is whether it is possible to use an agent-based approach to re-engineer a biological process, making use of the available knowledge from experimental and modelling efforts, and whether the bottom-up approach can benefit from the top-down exercise so as to create an integrated modelling formalism for systems biology. We propose a modelling pipeline that learns from the data given by reverse engineering and uses it for re-engineering the system, to carry out in-silico experiments. A mathematical model that quantitatively predicts co-expression of EGFR-HER2 receptors in activation and trafficking has been taken for this study. The pipeline architecture takes cues from the population model that gives the rates of biochemical reactions, to formulate knowledge-based rules for the particle model. Agent-based simulations using these rules support the existing facts on EGFR-HER2 dynamics. We conclude that re-engineering models, built using the results of reverse engineering, opens up the possibility of harnessing the power pack of data which now lies scattered in the literature. Virtual experiments could then become more realistic when empowered with the findings of empirical cell biology and modelling studies. Implemented on the Agent Modelling Framework developed in-house; C++ code templates are available in the Supplementary material (contact: liz.csir@gmail.com). Supplementary data are available at Bioinformatics online.
All you need is shape: Predicting shear banding in sand with LS-DEM
NASA Astrophysics Data System (ADS)
Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.
2018-02-01
This paper presents discrete element method (DEM) simulations with experimental comparisons at multiple length scales, underscoring the crucial role of particle shape. The simulations build on technological advances in the DEM furnished by level sets (LS-DEM), which enable the mathematical representation of the surface of arbitrarily-shaped particles such as grains of sand. We show that this ability to model shape enables unprecedented capture of the mechanics of granular materials across scales ranging from macroscopic behavior to local behavior to particle behavior. Specifically, the model is able to predict the onset and evolution of shear banding in sands, replicating the most advanced high-fidelity experiments in triaxial compression equipped with sequential X-ray tomography imaging. We present comparisons of the model and experiment at an unprecedented level of quantitative agreement, building a one-to-one model where every particle in the more than 53,000-particle array has its own avatar or numerical twin. Furthermore, the boundary conditions of the experiment are faithfully captured by modeling the membrane effect as well as the platen displacement and tilting. The results show a computational tool that can give insight into the physics and mechanics of granular materials undergoing shear deformation and failure, with computational times comparable to those of the experiment. One quantitative measure extracted from the LS-DEM simulations that is currently not available experimentally is the evolution of three-dimensional force chains inside and outside of the shear band. We show that the rotations of the force chains are correlated to the rotations in stress principal directions.
Lee, Christina; Rowlands, Ingrid J
2015-02-01
To discuss an example of mixed methods in health psychology, involving separate quantitative and qualitative studies of women's mental health in relation to miscarriage, in which the two methods produced different but complementary results, and to consider ways in which the findings can be integrated. We describe two quantitative projects involving statistical analysis of data from 998 young women who had had miscarriages, and 8,083 who had not, across three waves of the Australian Longitudinal Study on Women's Health. We also describe a qualitative project involving thematic analysis of interviews with nine Australian women who had had miscarriages. The quantitative analyses indicate that the main differences between young women who do and do not experience miscarriage relate to social disadvantage (and thus likelihood of relatively early pregnancy) and to a lifestyle that makes pregnancy likely: once these factors are accounted for, there are no differences in mental health. Further, longitudinal modelling demonstrates that women who have had miscarriages show a gradual increase in mental health over time, with the exception of women with prior diagnoses of anxiety, depression, or both. By contrast, qualitative analysis of the interviews indicates that women who have had miscarriages experience deep emotional responses and a long and difficult process of coming to terms with their loss. A contextual model of resilience provides a possible framework for understanding these apparently disparate results. Considering positive mental health as including the ability to deal constructively with negative life events, and consequent emotional distress, offers a model that distinguishes between poor mental health and the processes of coping with major life events. In the context of miscarriage, women's efforts to struggle with difficult emotions, and search for meaning, can be viewed as pathways to resilience rather than to psychological distress. Statement of contribution: What is already known on this subject? Quantitative research shows that women who miscarry usually experience moderate depression and anxiety, which persists for around 6 months. Qualitative research shows that women who miscarry frequently experience deep grief, which can last for years. What does this study add? We consider ways in which these disparate findings might triangulate. The results suggest a need to distinguish between poor mental health and the experience of loss and grief. Adjusting to miscarriage is often emotionally challenging but not always associated with poor mental health.
NASA Astrophysics Data System (ADS)
Suzuki, Makoto; Kameda, Toshimasa; Doi, Ayumi; Borisov, Sergey; Babin, Sergey
2018-03-01
The interpretation of scanning electron microscopy (SEM) images of the latest semiconductor devices is not intuitive and requires comparison with computed images based on theoretical modeling and simulations. For quantitative image prediction and geometrical reconstruction of the specimen structure, the accuracy of the physical model is essential. In this paper, we review the current models of electron-solid interaction and discuss their accuracy. We compare simulated results with our experiments on SEM overlay of under-layers, grain imaging of copper interconnects, and hole-bottom visualization by angular selective detectors, and show that our model reproduces the experimental results well. Remaining issues for quantitative simulation are also discussed, including the accuracy of the charge dynamics, treatment of the beam skirt, and the explosive increase in computing time.
How and why does the immunological synapse form? Physical chemistry meets cell biology.
Chakraborty, Arup K
2002-03-05
During T lymphocyte (T cell) recognition of an antigen, a highly organized and specific pattern of membrane proteins forms in the junction between the T cell and the antigen-presenting cell (APC). This specialized cell-cell junction is called the immunological synapse. It is several micrometers large and forms over many minutes. A plethora of experiments are being performed to study the mechanisms that underlie synapse formation and the way in which information transfer occurs across the synapse. The wealth of experimental data that is beginning to emerge must be understood within a mechanistic framework if it is to prove useful in developing modalities to control the immune response. Quantitative models can complement experiments in the quest for such a mechanistic understanding by suggesting experimentally testable hypotheses. Here, a quantitative synapse assembly model is described. The model uses concepts developed in physical chemistry and cell biology and is able to predict the spatiotemporal evolution of cell shape and receptor protein patterns observed during synapse formation. Attention is directed to how the juxtaposition of model predictions and experimental data has led to intriguing hypotheses regarding the role of null and self peptides during synapse assembly, as well as correlations between T cell effector functions and the robustness of synapse assembly. We remark on some ways in which synergistic experiments and modeling studies can improve current models, and we take steps toward a better understanding of information transfer across the T cell-APC junction.
NASA Technical Reports Server (NTRS)
Noever, David A.
1990-01-01
With and without bioconvective pattern formation, a theoretical model predicts growth in light-limited cultures of motile algae. At the critical density for pattern formation, the resulting doubly exponential population curves show an inflection. Such growth corresponds quantitatively to experiments in mechanically unstirred cultures. This attaches survival value to synchronized pattern formation.
Ernst, Marielle; Kriston, Levente; Romero, Javier M.; Frölich, Andreas M.; Jansen, Olav; Fiehler, Jens; Buhk, Jan-Hendrik
2016-01-01
Purpose: We sought to develop a standardized curriculum capable of assessing key competencies in Interventional Neuroradiology by the use of models and simulators in an objective, quantitative, and efficient way. In this evaluation we analyzed the associations between the practical experience, theoretical knowledge, and the skills lab performance of interventionalists. Materials and Methods: We evaluated the endovascular skills of 26 participants of the Advanced Course in Endovascular Interventional Neuroradiology of the European Society of Neuroradiology with a set of three tasks (aneurysm coiling and thrombectomy in a virtual simulator and placement of an intra-aneurysmal flow disruptor in a flow model). Practical experience was assessed by a survey. Participants completed a written and oral examination to evaluate theoretical knowledge. Bivariate and multivariate analyses were performed. Results: In multivariate analysis, knowledge of materials and techniques in Interventional Neuroradiology was moderately associated with skills in aneurysm coiling and thrombectomy. Experience in mechanical thrombectomy was moderately associated with thrombectomy skills, while age was negatively associated with thrombectomy skills. We found no significant association between age, sex, or work experience and skills in aneurysm coiling. Conclusion: Our study gives an example of how an integrated curriculum for reasonable and cost-effective assessment of key competences of an interventional neuroradiologist could look. In addition to traditional assessment of theoretical knowledge, practical skills are measured by the use of endovascular simulators, yielding objective, quantitative, and constructive data for the evaluation of the current performance status of participants as well as the evolution of their technical competency over time. PMID:26848840
Three-dimensional drift kinetic response of high-β plasmas in the DIII-D tokamak
Wang, Zhirui R.; Lanctot, Matthew J.; Liu, Y. Q.; ...
2015-04-07
A quantitative interpretation of the experimentally measured high pressure plasma response to externally applied three-dimensional (3D) magnetic field perturbations, across the no-wall Troyon limit, is achieved. The key to success is the self-consistent inclusion of the drift kinetic resonance effects in numerical modeling using the MARS-K code. This resolves an outstanding issue of the ideal magneto-hydrodynamic model, which significantly over-predicts the plasma induced field amplification near the no-wall limit, as compared to experiments. The self-consistent drift kinetic model leads to quantitative agreement not only for the measured 3D field amplitude and toroidal phase, but also for the measured internal 3D displacement of the plasma.
NASA Astrophysics Data System (ADS)
Shirley, Rachel Elizabeth
Nuclear power plant (NPP) simulators are proliferating in academic research institutions and national laboratories in response to the availability of affordable, digital simulator platforms. Accompanying the new research facilities is a renewed interest in using data collected in NPP simulators for Human Reliability Analysis (HRA) research. An experiment conducted in The Ohio State University (OSU) NPP Simulator Facility develops data collection methods and analytical tools to improve use of simulator data in HRA. In the pilot experiment, student operators respond to design basis accidents in the OSU NPP Simulator Facility. Thirty-three undergraduate and graduate engineering students participated in the research. Following each accident scenario, student operators completed a survey about perceived simulator biases and watched a video of the scenario. During the video, they periodically recorded their perceived strength of significant Performance Shaping Factors (PSFs) such as Stress. This dissertation reviews three aspects of simulator-based research using the data collected in the OSU NPP Simulator Facility: First, a qualitative comparison of student operator performance to computer simulations of expected operator performance generated by the Information Decision Action Crew (IDAC) HRA method. Areas of comparison include procedure steps, timing of operator actions, and PSFs. Second, development of a quantitative model of the simulator bias introduced by the simulator environment. Two types of bias are defined: Environmental Bias and Motivational Bias. This research examines Motivational Bias--that is, the effect of the simulator environment on an operator's motivations, goals, and priorities. A bias causal map is introduced to model motivational bias interactions in the OSU experiment. Data collected in the OSU NPP Simulator Facility are analyzed using Structural Equation Modeling (SEM). Data include crew characteristics, operator surveys, and time to recognize and diagnose the accident in the scenario. These models estimate how the effects of the scenario conditions are mediated by simulator bias, and demonstrate how to quantify the strength of the simulator bias. Third, development of a quantitative model of subjective PSFs based on objective data (plant parameters, alarms, etc.) and PSF values reported by student operators. The objective PSF model is based on the PSF network in the IDAC HRA method. The final model is a mixed effects Bayesian hierarchical linear regression model. The subjective PSF model includes three factors: The Environmental PSF, the simulator Bias, and the Context. The Environmental Bias is mediated by an operator sensitivity coefficient that captures the variation in operator reactions to plant conditions. The data collected in the pilot experiments are not expected to reflect professional NPP operator performance, because the students are still novice operators. However, the models used in this research and the methods developed to analyze them demonstrate how to consider simulator bias in experiment design and how to use simulator data to enhance the technical basis of a complex HRA method. The contributions of the research include a framework for discussing simulator bias, a quantitative method for estimating simulator bias, a method for obtaining operator-reported PSF values, and a quantitative method for incorporating the variability in operator perception into PSF models. 
The research also demonstrates applications of Structural Equation Modeling and hierarchical Bayesian linear regression models in HRA and, finally, the benefits of using student operators as a test platform for HRA research.
Aerothermal modeling program, phase 1
NASA Technical Reports Server (NTRS)
Sturgess, G. J.
1983-01-01
The physical modeling embodied in the computational fluid dynamics codes is discussed. The objectives were to identify shortcomings in the models and to provide a program plan to improve the quantitative accuracy. The physical models studied were for: turbulent mass and momentum transport, heat release, liquid fuel spray, and gaseous radiation. The approach adopted was to test the models against appropriate benchmark-quality test cases from experiments in the literature for the constituent flows that together make up the combustor real flow.
Optical observables in stars with non-stationary atmospheres. [fireballs and cepheid models
NASA Technical Reports Server (NTRS)
Hillendahl, R. W.
1980-01-01
Experience gained by use of Cepheid modeling codes to predict the dimensional and photometric behavior of nuclear fireballs is used as a means of validating various computational techniques used in the Cepheid codes. Predicted results from Cepheid models are compared with observations of the continuum and lines in an effort to demonstrate that the atmospheric phenomena in Cepheids are quite complex but that they can be quantitatively modeled.
Student Experiments on the Effects of Dam Removal on the Elwha River
NASA Astrophysics Data System (ADS)
Sandland, T. O.; Grack Nelson, A. L.
2006-12-01
The National Center for Earth Surface Dynamics (NCED) is an NSF funded Science and Technology Center devoted to developing a quantitative, predictive science of the ecological and physical processes that define and shape rivers and river networks. The Science Museum of Minnesota's (SMM) Earthscapes River Restoration classes provide K-12 students, teachers, and the public opportunities to explore NCED concepts and, like NCED scientists, move from a qualitative to a quantitative-based understanding of river systems. During a series of classes, students work with an experimental model of the Elwha River in Washington State to gain an understanding of the processes that define and shape river systems. Currently, two large dams on the Elwha are scheduled for removal to restore salmon habitat. Students design different dam removal scenarios to test and make qualitative observations describing and comparing how the modeled system evolves over time. In a following session, after discussing the ambiguity of the previous session's qualitative data, student research teams conduct a quantitative experiment to collect detailed measurements of the system. Finally, students interpret, critique, and compare the data the groups collected and ultimately develop and advocate a recommendation for the "ideal" dam removal scenario. SMM is currently conducting a formative evaluation of River Restoration classes to improve their educational effectiveness and guide development of an educator's manual. As of August 2006, pre- and post-surveys have been administered to 167 students to gauge student learning and engagement. The surveys have found the program successful in teaching students why scientists use river models and what processes and phenomena are at work in river systems. Most notable is the increase in student awareness of sediment in river systems. A post-visit survey was also administered to 20 teachers who used the models in their classrooms. This survey provided feedback about teachers' experience with the program and will help inform the development of a future educator's manual. All teachers found the program to be effective at providing opportunities for students to make qualitative observations and most (95%) found the program effective at providing students opportunities to make quantitative measurements. A full summary of evaluation results will be shared at the meeting.
ERIC Educational Resources Information Center
Hatt, Sue; Hannan, Andrew; Baxter, Arthur
2005-01-01
This article draws on quantitative and qualitative data from two institutions to compare the student experience of those with and without bursary awards. Using the student life cycle model, the article examines the ways in which bursaries impact on the student experience before they enter the institution, in the early weeks of their studies and as…
ERIC Educational Resources Information Center
McCallister, Zane Gary; McCallister, Gary Loren
1996-01-01
Presents a model experiment for quantifying phagocytosis using earthworm coelomocytes and determining the optimum length of time necessary to obtain maximum phagocytosis. Involves incubating coelomocytes from invertebrates with an antigen, staining the cells, counting the number of antigen particles ingested, and measuring the effect of different…
NASA Astrophysics Data System (ADS)
Chen, Hui; Tan, Chao; Lin, Zan; Wu, Tong
2018-01-01
Milk is among the most popular nutrient sources worldwide and is of great interest due to its beneficial medicinal properties. The feasibility of classifying milk powder samples with respect to their brands and of determining protein concentration is investigated by NIR spectroscopy along with chemometrics. Two datasets were prepared for the experiments. One contains 179 samples of four brands for classification and the other contains 30 samples for quantitative analysis. Principal component analysis (PCA) was used for exploratory analysis. Based on an effective model-independent variable selection method, i.e., minimal-redundancy maximal-relevance (MRMR), only 18 variables were selected to construct a partial least-squares discriminant analysis (PLS-DA) model. On the test set, the PLS-DA model based on the selected variable set was compared with the full-spectrum PLS-DA model, both of which achieved 100% accuracy. In quantitative analysis, the partial least-squares regression (PLSR) model constructed from the selected subset of 260 variables significantly outperforms the full-spectrum model. The combination of NIR spectroscopy, MRMR, and PLS-DA or PLSR appears to be a powerful tool for classifying different brands of milk and determining the protein content.
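A minimal sketch of the pipeline's two stages (mRMR-style selection, then PLS-DA), assuming synthetic stand-ins for the spectra and a common mutual-information/correlation form of mRMR that may differ in detail from the authors' implementation:

```python
# Sketch: greedy mRMR variable selection followed by PLS-DA via one-hot
# PLS regression. Data shapes mirror the abstract; values are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
X = rng.random((179, 700))            # stand-in: 179 spectra, 700 wavelengths
y = rng.integers(0, 4, size=179)      # stand-in: four milk-powder brands

def mrmr(X, y, k):
    """Greedy mRMR: maximize relevance, penalize mean redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # redundancy proxy: mean absolute correlation with chosen vars
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = relevance[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

idx = mrmr(X, y, k=18)                # 18 variables, as in the abstract
Y = np.eye(4)[y]                      # one-hot coding turns PLS into PLS-DA
plsda = PLSRegression(n_components=3).fit(X[:, idx], Y)
pred = plsda.predict(X[:, idx]).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

For the quantitative branch, the same selection step would feed a plain PLSRegression against the protein concentrations instead of the one-hot labels.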
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas ranging from improving usability of technical displays or websites up to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: A quantitative dimension that increases complexity through number of elements, and a structural dimension representing order negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
Tonges, Mary; Ray, Joel D; Herman, Suzanne; McCann, Meghan
2018-04-01
Patient satisfaction is a key component of healthcare organizations' performance. Providing a consistent, positive patient experience across a system can be challenging. This article describes an organization's approach to achieving this goal by implementing a successful model developed at the flagship academic healthcare center across an 8-hospital system. The Carolina Care at University of North Carolina Health Care initiative has resulted in substantive qualitative and quantitative benefits including higher patient experience scores for both overall rating and nurse communication.
Fielding-Miller, Rebecca; Dunkle, Kristin L; Cooper, Hannah L F; Windle, Michael; Hadley, Craig
2016-01-01
Transactional sex is associated with increased risk of HIV and gender based violence in southern Africa and around the world. However, the typical quantitative operationalization, "the exchange of gifts or money for sex," can be at odds with a wide array of relationship types and motivations described in qualitative explorations. To build on the strengths of both qualitative and quantitative research streams, we used cultural consensus models to identify distinct models of transactional sex in Swaziland. The process allowed us to build and validate emic scales of transactional sex, while identifying key informants for qualitative interviews within each model to contextualize women's experiences and risk perceptions. We used logistic and multinomial logistic regression models to measure associations with condom use and social status outcomes. Fieldwork was conducted between November 2013 and December 2014 in the Hhohho and Manzini regions. We identified three distinct models of transactional sex in Swaziland based on 124 Swazi women's emic valuation of what they hoped to receive in exchange for sex with their partners. In a clinic-based survey (n = 406), consensus model scales were more sensitive to condom use than the etic definition. Model consonance had distinct effects on social status for the three different models. Transactional sex is better measured as an emic spectrum of expectations within a relationship, rather than an etic binary relationship type. Cultural consensus models allowed us to blend qualitative and quantitative approaches to create an emicly valid quantitative scale grounded in qualitative context.
Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.
2008-08-01
A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
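To illustrate the model-fitting step, the sketch below fits the Kelvin-Voigt dispersion relation for shear-wave speed to synthetic dispersion data with nonlinear least squares. The Voigt form is a common choice in quantitative elastography and is assumed here, as are the density and parameter values; the authors' exact viscoelastic model may differ.

```python
# Hedged sketch: recover shear modulus mu [Pa] and viscosity eta [Pa.s]
# from a frequency-dependent shear-wave speed curve via the Kelvin-Voigt
# dispersion relation.
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # tissue density, kg/m^3 (assumed)

def voigt_speed(omega, mu, eta):
    m = np.sqrt(mu**2 + (omega * eta)**2)
    return np.sqrt(2.0 * m**2 / (RHO * (mu + m)))

# synthetic "measured" dispersion curve over 100-400 Hz
f = np.linspace(100, 400, 12)
omega = 2 * np.pi * f
rng = np.random.default_rng(2)
c_meas = voigt_speed(omega, mu=4e3, eta=2.0) + 0.02 * rng.standard_normal(f.size)

(mu_hat, eta_hat), _ = curve_fit(voigt_speed, omega, c_meas, p0=(1e3, 1.0))
print(f"mu = {mu_hat:.0f} Pa, eta = {eta_hat:.2f} Pa.s")
```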
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are of great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative powers for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performances, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
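A minimal sketch of the scoring and prediction idea, assuming binary side-effect profiles, randomly generated weights (as in the paper's simulations), and generic regressors standing in for the feature-specific models:

```python
# Sketch: collapse each drug's binary side-effect profile into a single
# weighted-sum score, then predict that score from drug features by
# averaging several regressors (an "average scoring ensemble").
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_drugs, n_effects, n_feats = 300, 50, 40
profiles = rng.integers(0, 2, size=(n_drugs, n_effects))  # drug x side effect
weights = rng.random(n_effects)                           # empirical weights
scores = profiles @ weights                               # quantitative score

# stand-in for substructures/targets/indications, concatenated
X = rng.random((n_drugs, n_feats))
train, test = np.arange(250), np.arange(250, 300)

models = [Ridge(alpha=1.0),
          RandomForestRegressor(n_estimators=200, random_state=0)]
preds = np.mean([m.fit(X[train], scores[train]).predict(X[test])
                 for m in models], axis=0)   # average the ensemble's scores
rmse = np.sqrt(np.mean((preds - scores[test]) ** 2))
print("test RMSE:", round(rmse, 3))
```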
NASA Astrophysics Data System (ADS)
Kühn, Michael; Vieth-Hillebrand, Andrea; Wilke, Franziska D. H.
2017-04-01
Black shales are a heterogeneous mixture of minerals, organic matter, and formation water, and little is actually known about the fluid-rock interactions during hydraulic fracturing and their effects on the composition of flowback and produced water. Geochemical simulations have been performed based on the analyses of "real" flowback water samples and artificial stimulation fluids from lab experiments, with the aim of setting up a chemical process model for shale gas reservoirs. Prediction of flowback water compositions for potential or already chosen sites requires validated and parameterized geochemical models. For the software "Geochemist's Workbench" (GWB), databases are adapted and amended based on a literature review. Evaluation of the system has been performed in comparison with the results from laboratory experiments. Parameterization was done with regard to the field data provided. Finally, reaction path models are applied for quantitative information about the mobility of compounds in specific settings. Our work leads to quantitative estimates of reservoir compounds in the flowback based on calibrations by laboratory experiments. Such information is crucial for the assessment of environmental impacts as well as for estimating the human- and ecotoxicological effects of the flowback waters from a variety of natural gas shales. With comprehensive knowledge about the potential composition and mobility of flowback water, selection of water treatment techniques will become easier.
An experimental approach to the fundamental principles of hemodynamics.
Pontiga, Francisco; Gaytán, Susana P
2005-09-01
An experimental model has been developed to give students hands-on experience with the fundamental laws of hemodynamics. The proposed experimental setup is of simple construction but permits precise measurement of the physical variables involved. The model consists of a series of experiments in which different basic phenomena are quantitatively investigated, such as the pressure drop in a long straight vessel and in an obstructed vessel, the transition from laminar to turbulent flow, the association of vessels in vascular networks, or the generation of a critical stenosis. Through these experiments, students acquire a direct appreciation of the importance of the parameters involved in the relationship between pressure and flow rate, thus facilitating the comprehension of more complex problems in hemodynamics.
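For reference, the basic relations these experiments probe, under the usual assumptions of steady laminar flow of a Newtonian fluid in a rigid cylindrical vessel (blood and real vessels only approximate these):

```latex
% Hagen-Poiseuille pressure drop, hydraulic resistance, Reynolds number,
% and resistances of vessels in series and in parallel.
\[
  \Delta P = \frac{8 \mu L Q}{\pi R^{4}}, \qquad
  R_{\mathrm{hyd}} = \frac{\Delta P}{Q} = \frac{8 \mu L}{\pi R^{4}},
\]
\[
  \mathrm{Re} = \frac{\rho \bar{v} D}{\mu}
  \quad (\text{transition near } \mathrm{Re} \approx 2000\text{--}2300),
\]
\[
  R_{\mathrm{series}} = R_{1} + R_{2} + \cdots, \qquad
  \frac{1}{R_{\mathrm{parallel}}} = \frac{1}{R_{1}} + \frac{1}{R_{2}} + \cdots
\]
```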
Using sobol sequences for planning computer experiments
NASA Astrophysics Data System (ADS)
Statnikov, I. N.; Firsov, G. I.
2017-12-01
We discuss the use of the Planning LP-search (PLP-search) method for problems of multicriteria synthesis of dynamic systems. Based on experiments with a simulation model, the method not only allows the parameter space to be revised within specified ranges of variation, but, owing to the special randomized planning of these experiments, also permits a quantitative statistical evaluation of the influence of the varied parameters and their pairwise combinations on the properties of the dynamic system.
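A small sketch of the planning step using a Sobol low-discrepancy design; the three parameter ranges and the toy damped-oscillator response are illustrative assumptions, not the authors' system:

```python
# Sketch: draw a Sobol design over the varied parameters, evaluate a
# stand-in simulation model at each point, and do a crude sensitivity
# screen of each parameter against the response.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit = sampler.random_base2(m=7)                  # 2**7 = 128 design points
lo, hi = [0.1, 1.0, 10.0], [1.0, 5.0, 50.0]       # assumed parameter ranges
design = qmc.scale(unit, lo, hi)

def simulate(p):                                  # stand-in dynamic-system
    k, c, m = p                                   # response (depends on the
    return np.exp(-c / (2 * np.sqrt(k * m)))      # damping ratio)

response = np.apply_along_axis(simulate, 1, design)
# correlation of each varied parameter with the response
for name, col in zip(["k", "c", "m"], design.T):
    print(name, round(np.corrcoef(col, response)[0, 1], 2))
```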
NASA Astrophysics Data System (ADS)
Son, Seok-Woo; Han, Bo-Reum; Garfinkel, Chaim I.; Kim, Seo-Yeon; Park, Rokjin; Abraham, N. Luke; Akiyoshi, Hideharu; Archibald, Alexander T.; Butchart, N.; Chipperfield, Martyn P.; Dameris, Martin; Deushi, Makoto; Dhomse, Sandip S.; Hardiman, Steven C.; Jöckel, Patrick; Kinnison, Douglas; Michou, Martine; Morgenstern, Olaf; O’Connor, Fiona M.; Oman, Luke D.; Plummer, David A.; Pozzer, Andrea; Revell, Laura E.; Rozanov, Eugene; Stenke, Andrea; Stone, Kane; Tilmes, Simone; Yamashita, Yousuke; Zeng, Guang
2018-05-01
The Southern Hemisphere (SH) zonal-mean circulation change in response to Antarctic ozone depletion is re-visited by examining a set of the latest model simulations archived for the Chemistry-Climate Model Initiative (CCMI) project. All models reasonably well reproduce Antarctic ozone depletion in the late 20th century. The related SH-summer circulation changes, such as a poleward intensification of westerly jet and a poleward expansion of the Hadley cell, are also well captured. All experiments exhibit quantitatively the same multi-model mean trend, irrespective of whether the ocean is coupled or prescribed. Results are also quantitatively similar to those derived from the Coupled Model Intercomparison Project phase 5 (CMIP5) high-top model simulations in which the stratospheric ozone is mostly prescribed with monthly- and zonally-averaged values. These results suggest that the ozone-hole-induced SH-summer circulation changes are robust across the models irrespective of the specific chemistry-atmosphere-ocean coupling.
Quantitative photoacoustic elasticity and viscosity imaging for cirrhosis detection
NASA Astrophysics Data System (ADS)
Wang, Qian; Shi, Yujiao; Yang, Fen; Yang, Sihua
2018-05-01
Elasticity and viscosity assessments are essential for understanding and characterizing the physiological and pathological states of tissue. In this work, by establishing a photoacoustic (PA) shear wave model, an approach for quantitative PA elasticity imaging based on measurement of the rise time of the thermoelastic displacement was developed. Combined with an existing PA viscoelasticity imaging method that features a phase delay measurement, quantitative PA elasticity and viscosity images can thus be obtained simultaneously. The method was tested and validated by imaging viscoelastic agar phantoms prepared at different agar concentrations, and the imaging data were in good agreement with rheometry results. Ex vivo experiments on liver pathological models demonstrated the capability for cirrhosis detection, and the results were consistent with the corresponding histological results. This method expands the scope of conventional PA imaging and has potential to become an important alternative imaging modality.
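As a hedged aside on the quantities involved (the paper's PA shear-wave model is not reproduced here), a Kelvin-Voigt description ties elasticity and viscosity to the measured rise time and phase delay:

```latex
% Kelvin-Voigt relations, assumed here for illustration; the paper's PA
% shear-wave model may differ in detail.
\[
  G^{*}(\omega) = \mu + i\,\omega\eta, \qquad
  \tau_{\mathrm{relax}} = \frac{\eta}{\mu}, \qquad
  \tan\delta = \frac{\omega\eta}{\mu},
\]
with $\mu$ the shear elasticity, $\eta$ the shear viscosity, and $\delta$
the phase delay used by the existing PA viscoelasticity method; stiffer
tissue (larger $\mu$) shortens the displacement rise time.
```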
Multi-scale Modeling of Plasticity in Tantalum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum and validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature and strain rate dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature and strain rate dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature and strain rate dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformations at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulations show that the proposed model accurately captures plasticity in deformation of polycrystalline tantalum.
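For context, kink-pair theories are commonly cast as a thermally activated slip law of Kocks-Mecking form; a generic sketch follows (the symbols and exponent ranges are standard conventions, not the report's calibrated values):

```latex
% Generic thermally activated (kink-pair) slip law of Kocks-Mecking form.
\[
  \dot{\gamma}^{\alpha} = \dot{\gamma}_{0}
  \exp\!\left[ -\frac{\Delta H_{0}}{k_{B} T}
  \left( 1 - \left( \frac{|\tau^{\alpha}|}{\tau_{0}} \right)^{p}
  \right)^{q} \right] \operatorname{sgn}(\tau^{\alpha}),
  \qquad 0 < p \le 1, \quad 1 \le q \le 2,
\]
where $\Delta H_{0}$ is the kink-pair activation enthalpy, $\tau_{0}$ the
zero-temperature threshold stress, and $\tau^{\alpha}$ the resolved shear
stress on slip system $\alpha$.
```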
Quantitative trait nucleotide analysis using Bayesian model selection.
Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D
2005-10-01
Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
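A schematic sketch of the model-selection-and-averaging step, assuming a simple linear model for unrelated individuals with BIC-approximated posterior weights; SOLAR's family-based variance-component machinery is not reproduced here:

```python
# Sketch: enumerate regression models over subsets of typed variants,
# weight them by BIC-approximated posterior probabilities, and report a
# posterior inclusion probability per variant (the "probability of effect").
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, n_snp = 500, 6
G = rng.integers(0, 3, size=(n, n_snp)).astype(float)  # genotypes 0/1/2
y = 0.5 * G[:, 2] + rng.standard_normal(n)             # SNP 2 is functional

def bic(y, X):
    X1 = (np.column_stack([np.ones(len(y)), X])
          if X.size else np.ones((len(y), 1)))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + X1.shape[1] * np.log(len(y))

models, bics = [], []
for r in range(n_snp + 1):
    for subset in itertools.combinations(range(n_snp), r):
        models.append(subset)
        bics.append(bic(y, G[:, list(subset)]))

w = np.exp(-0.5 * (np.array(bics) - min(bics)))   # posterior model weights
w /= w.sum()
for j in range(n_snp):
    pip = sum(wi for wi, m in zip(w, models) if j in m)
    print(f"variant {j}: posterior probability of effect = {pip:.2f}")
```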
A novel statistical method for quantitative comparison of multiple ChIP-seq datasets.
Chen, Li; Wang, Chi; Qin, Zhaohui S; Wu, Hao
2015-06-15
ChIP-seq is a powerful technology to measure the protein binding or histone modification strength in the whole genome scale. Although there are a number of methods available for single ChIP-seq data analysis (e.g. 'peak detection'), rigorous statistical method for quantitative comparison of multiple ChIP-seq datasets with the considerations of data from control experiment, signal to noise ratios, biological variations and multiple-factor experimental designs is under-developed. In this work, we develop a statistical method to perform quantitative comparison of multiple ChIP-seq datasets and detect genomic regions showing differential protein binding or histone modification. We first detect peaks from all datasets and then union them to form a single set of candidate regions. The read counts from IP experiment at the candidate regions are assumed to follow Poisson distribution. The underlying Poisson rates are modeled as an experiment-specific function of artifacts and biological signals. We then obtain the estimated biological signals and compare them through the hypothesis testing procedure in a linear model framework. Simulations and real data analyses demonstrate that the proposed method provides more accurate and robust results compared with existing ones. An R software package ChIPComp is freely available at http://web1.sph.emory.edu/users/hwu30/software/ChIPComp.html.
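A toy sketch of the statistical core, assuming a Poisson log-linear model per candidate region with a sequencing-depth offset; ChIPComp's handling of control experiments and biological variability goes beyond this simplification:

```python
# Sketch: per-region Poisson GLM with a log link, a library-size offset,
# and a condition effect whose coefficient is the tested log fold-change.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
condition = np.array([0, 0, 1, 1])        # two conditions, two replicates
depth = np.array([1.0, 1.3, 0.9, 1.1])    # relative library sizes

def test_region(base_rate, fold):
    mu = base_rate * depth * fold ** condition
    counts = rng.poisson(mu)
    X = sm.add_constant(condition.astype(float))
    fit = sm.GLM(counts, X, family=sm.families.Poisson(),
                 offset=np.log(depth)).fit()
    return fit.params[1], fit.pvalues[1]  # log fold-change and p-value

for name, fold in [("unchanged", 1.0), ("differential", 3.0)]:
    lfc, p = test_region(base_rate=50, fold=fold)
    print(f"{name}: logFC={lfc:.2f}, p={p:.3g}")
```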
NASA Astrophysics Data System (ADS)
Mota, F. L.; Song, Y.; Pereda, J.; Billia, B.; Tourret, D.; Debierre, J.-M.; Trivedi, R.; Karma, A.; Bergeon, N.
2017-08-01
To study the dynamical formation and evolution of cellular and dendritic arrays under diffusive growth conditions, three-dimensional (3D) directional solidification experiments were conducted in microgravity on a model transparent alloy onboard the International Space Station using the Directional Solidification Insert in the DEvice for the study of Critical LIquids and Crystallization. Selected experiments were repeated on Earth under gravity-driven fluid flow to evidence convection effects. Both radial and axial macrosegregation resulting from convection are observed in ground experiments, and primary spacings measured on Earth and microgravity experiments are noticeably different. The microgravity experiments provide unique benchmark data for numerical simulations of spatially extended pattern formation under diffusive growth conditions. The results of 3D phase-field simulations highlight the importance of accurately modeling thermal conditions that strongly influence the front recoil of the interface and the selection of the primary spacing. The modeling predictions are in good quantitative agreement with the microgravity experiments.
Mechanical testing of bones: the positive synergy of finite-element models and in vitro experiments.
Cristofolini, Luca; Schileo, Enrico; Juszczyk, Mateusz; Taddei, Fulvia; Martelli, Saulo; Viceconti, Marco
2010-06-13
Bone biomechanics have been extensively investigated in the past both with in vitro experiments and numerical models. In most cases either approach is chosen, without exploiting synergies. Both experiments and numerical models suffer from limitations relative to their accuracy and their respective fields of application. In vitro experiments can improve numerical models by: (i) preliminarily identifying the most relevant failure scenarios; (ii) improving the model identification with experimentally measured material properties; (iii) improving the model identification with accurately measured actual boundary conditions; and (iv) providing quantitative validation based on mechanical properties (strain, displacements) directly measured from physical specimens being tested in parallel with the modelling activity. Likewise, numerical models can improve in vitro experiments by: (i) identifying the most relevant loading configurations among a number of motor tasks that cannot be replicated in vitro; (ii) identifying acceptable simplifications for the in vitro simulation; (iii) optimizing the use of transducers to minimize errors and provide measurements at the most relevant locations; and (iv) exploring a variety of different conditions (material properties, interface, etc.) that would require enormous experimental effort. By reporting an example of successful investigation of the femur, we show how a combination of numerical modelling and controlled experiments within the same research team can be designed to create a virtuous circle where models are used to improve experiments, experiments are used to improve models and their combination synergistically provides more detailed and more reliable results than can be achieved with either approach singularly.
Quantifying (dis)agreement between direct detection experiments in a halo-independent way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk
We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.
Julkunen, Petro; Kiviranta, Panu; Wilson, Wouter; Jurvelin, Jukka S; Korhonen, Rami K
2007-01-01
Load-bearing characteristics of articular cartilage are impaired during tissue degeneration. Quantitative microscopy enables in vitro investigation of cartilage structure but determination of tissue functional properties necessitates experimental mechanical testing. The fibril-reinforced poroviscoelastic (FRPVE) model has been used successfully for estimation of cartilage mechanical properties. The model includes realistic collagen network architecture, as shown by microscopic imaging techniques. The aim of the present study was to investigate the relationships between the cartilage proteoglycan (PG) and collagen content as assessed by quantitative microscopic findings, and model-based mechanical parameters of the tissue. Site-specific variation of the collagen network moduli, PG matrix modulus and permeability was analyzed. Cylindrical cartilage samples (n=22) were harvested from various sites of the bovine knee and shoulder joints. Collagen orientation, as quantitated by polarized light microscopy, was incorporated into the finite-element model. Stepwise stress-relaxation experiments in unconfined compression were conducted for the samples, and sample-specific models were fitted to the experimental data in order to determine values of the model parameters. For comparison, Fourier transform infrared imaging and digital densitometry were used for the determination of collagen and PG content in the same samples, respectively. The initial and strain-dependent fibril network moduli as well as the initial permeability correlated significantly with the tissue collagen content. The equilibrium Young's modulus of the nonfibrillar matrix and the strain dependency of permeability were significantly associated with the tissue PG content. The present study demonstrates that modern quantitative microscopic methods in combination with the FRPVE model are feasible methods to characterize the structure-function relationships of articular cartilage.
Assessing Psychodynamic Conflict.
Simmonds, Joshua; Constantinides, Prometheas; Perry, J Christopher; Drapeau, Martin; Sheptycki, Amanda R
2015-09-01
Psychodynamic psychotherapies suggest that symptomatic relief is provided, in part, with the resolution of psychic conflicts. Clinical researchers have used innovative methods to investigate such phenomena. This article aims to review the literature on quantitative psychodynamic conflict rating scales. An electronic search of the literature was conducted to retrieve quantitative observer-rated scales used to assess conflict, noting each measure's theoretical model, information source, and the training and clinical experience required. Scales were also examined for levels of reliability and validity. Five quantitative observer-rated conflict scales were identified. Reliability varied from poor to excellent, with each measure demonstrating good validity. However, a small number of studies and limited links to current conflict theory suggest further clinical research is needed.
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
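The signal-detection building block for localization accuracy can be written and evaluated directly; a sketch assuming M independent unit-variance Gaussian responses with target separation d' (the authors' full guided-search extension is richer than this generic formula):

```python
# Worked example: probability of correctly localizing a target among M
# locations in the standard SDT model, P = integral of
# phi(x - d') * Phi(x)^(M-1) dx (target response beats all distractors).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_correct(d_prime, M):
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (M - 1)
    val, _ = quad(integrand, -10, 10 + d_prime)
    return val

for d in (0.5, 1.0, 2.0):
    print(f"d'={d}: P(correct, M=8) = {p_correct(d, 8):.3f}")

# sanity check: d'=0 with M=8 should give chance performance, 1/8
assert abs(p_correct(0.0, 8) - 1 / 8) < 1e-6
```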
Using Weighted Entropy to Rank Chemicals in Quantitative High Throughput Screening Experiments
Shockley, Keith R.
2014-01-01
Quantitative high throughput screening (qHTS) experiments can simultaneously produce concentration-response profiles for thousands of chemicals. In a typical qHTS study, a large chemical library is subjected to a primary screen in order to identify candidate hits for secondary screening, validation studies or prediction modeling. Different algorithms, usually based on the Hill equation logistic model, have been used to classify compounds as active or inactive (or inconclusive). However, observed concentration-response activity relationships may not adequately fit a sigmoidal curve. Furthermore, it is unclear how to prioritize chemicals for follow-up studies given the large uncertainties that often accompany parameter estimates from nonlinear models. Weighted Shannon entropy can address these concerns by ranking compounds according to profile-specific statistics derived from estimates of the probability mass distribution of response at the tested concentration levels. This strategy can be used to rank all tested chemicals in the absence of a pre-specified model structure or the approach can complement existing activity call algorithms by ranking the returned candidate hits. The weighted entropy approach was evaluated here using data simulated from the Hill equation model. The procedure was then applied to a chemical genomics profiling data set interrogating compounds for androgen receptor agonist activity. PMID:24056003
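A hedged sketch of entropy-based ranking, assuming each profile's absolute responses are normalized into a probability mass across the tested concentrations; the specific weighting used by Shockley (2014) may differ from the linear ramp assumed here:

```python
# Sketch: weighted Shannon entropy of a concentration-response profile.
# Profiles whose response mass concentrates at a few (high) concentrations
# get lower weighted entropy and so rank as more likely actives.
import numpy as np

def weighted_entropy(responses, weights):
    p = np.abs(responses) / np.sum(np.abs(responses))
    p = np.clip(p, 1e-12, None)                 # avoid log(0)
    return -np.sum(weights * p * np.log2(p))

conc_weights = np.linspace(0.5, 1.5, 8)         # emphasize high concentrations

flat = np.array([0.1, -0.2, 0.1, 0.0, 0.2, -0.1, 0.1, 0.0])   # inactive-like
hill = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 8.0, 12.0, 13.0])   # active-like

for name, r in [("flat", flat), ("hill", hill)]:
    print(name, round(weighted_entropy(r, conc_weights), 3))
```

Because the score needs no sigmoidal fit, it can rank profiles that a Hill-equation classifier would flag as inconclusive, which is the point the abstract makes.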
NASA Astrophysics Data System (ADS)
Bautista, Nazan Uludag
2011-06-01
This study investigated the effectiveness of an Early Childhood Education science methods course that focused exclusively on providing various mastery (i.e., enactive, cognitive content, and cognitive pedagogical) and vicarious experiences (i.e., cognitive self-modeling, symbolic modeling, and simulated modeling) in increasing preservice elementary teachers' self-efficacy beliefs. Forty-four preservice elementary teachers participated in the study. Analysis of the quantitative (STEBI-b) and qualitative (informal surveys) data revealed that personal science teaching efficacy and science teaching outcome expectancy beliefs increased significantly over the semester. Enactive mastery, cognitive pedagogical mastery, symbolic modeling, and cognitive self-modeling were the major sources of self-efficacy. This list was followed by cognitive content mastery and simulated modeling. This study has implications for science teacher educators.
Psychophysically based model of surface gloss perception
NASA Astrophysics Data System (ADS)
Ferwerda, James A.; Pellacini, Fabio; Greenberg, Donald P.
2001-06-01
In this paper we introduce a new model of surface appearance that is based on quantitative studies of gloss perception. We use image synthesis techniques to conduct experiments that explore the relationships between the physical dimensions of glossy reflectance and the perceptual dimensions of glossy appearance. The product of these experiments is a psychophysically-based model of surface gloss, with dimensions that are both physically and perceptually meaningful and scales that reflect our sensitivity to gloss variations. We demonstrate that the model can be used to describe and control the appearance of glossy surfaces in synthetic images, allowing prediction of gloss matches and quantification of gloss differences. This work represents some initial steps toward developing psychophysical models of the goniometric aspects of surface appearance to complement widely-used colorimetric models.
NASA Astrophysics Data System (ADS)
Wang, Pin; Bista, Rajan K.; Khalbuss, Walid E.; Qiu, Wei; Uttam, Shikhar; Staton, Kevin; Zhang, Lin; Brentnall, Teresa A.; Brand, Randall E.; Liu, Yang
2010-11-01
Definitive diagnosis of malignancy is often challenging due to the limited availability of human cell or tissue samples and morphological similarity with certain benign conditions. Our recently developed technology, spatial-domain low-coherence quantitative phase microscopy (SL-QPM), overcomes these technical difficulties and enables us to obtain quantitative information about cell nuclear architecture with nanoscale sensitivity. We explore its ability to improve the identification of malignancy, especially in cytopathologically non-cancerous-appearing cells. We perform proof-of-concept experiments with an animal model of colorectal carcinogenesis, the APC(Min) mouse model, and with human cytology specimens of colorectal cancer. We show the ability of in situ nanoscale nuclear architectural characteristics to identify cancerous cells, especially those labeled as "indeterminate or normal" by expert cytopathologists. Our approach is based on quantitative analysis of the cell nucleus on the original cytology slides without additional processing, and can be readily applied in a conventional clinical setting. This simple and practical optical microscopy technique may lead to the development of novel methods for early detection of cancer.
Iterative optimization method for design of quantitative magnetization transfer imaging experiments.
Levesque, Ives R; Sled, John G; Pike, G Bruce
2011-09-01
Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.
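As an illustration of the iterative reduction idea (not the authors' QMTI code), the sketch below greedily removes, from a dense set of sampling points, the point whose loss costs the least D-optimality, measured as the log-determinant of the Fisher information. A generic two-parameter saturation curve stands in for the full two-pool magnetization transfer signal model. Because points are removed rather than freely re-optimized, clustering and repeated measures cannot arise, mirroring the design constraint noted above:

```python
# Toy greedy design reduction by D-optimality; the model function is a
# placeholder assumption, not the two-pool MT signal equation.
import numpy as np

def model(offsets, a, b):
    return a * offsets / (b + offsets)          # stand-in saturation curve

def jacobian(offsets, a, b, eps=1e-6):
    base = model(offsets, a, b)
    cols = []
    for i, p in enumerate((a, b)):
        dp = [a, b]
        dp[i] = p + eps
        cols.append((model(offsets, *dp) - base) / eps)
    return np.stack(cols, axis=1)

def reduce_design(offsets, a, b, n_keep):
    pts = list(offsets)
    while len(pts) > n_keep:
        scores = []
        for i in range(len(pts)):
            sub = np.array(pts[:i] + pts[i + 1:])
            J = jacobian(sub, a, b)
            sign, logdet = np.linalg.slogdet(J.T @ J)
            scores.append(logdet if sign > 0 else -np.inf)
        pts.pop(int(np.argmax(scores)))         # cheapest point to lose
    return np.array(pts)

dense = np.linspace(0.1, 10.0, 30)              # e.g., kHz offsets
print(reduce_design(dense, a=1.0, b=2.0, n_keep=6))
```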
Building a Database for a Quantitative Model
NASA Technical Reports Server (NTRS)
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does nothing to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database, but a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. The database can also contain the manipulations appropriate for how the data are used in the model, including stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
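A minimal sketch of the linking idea, using pandas in place of Excel; all table names, field names, and numbers below are invented for illustration:

```python
# Hypothetical example: link Basic Events to data sources via a unique
# metadata key, and apply the stressing/dormancy manipulations in one place.
import pandas as pd

sources = pd.DataFrame({
    "source_id": ["SRC-001", "SRC-002"],
    "base_failure_rate": [1.2e-6, 4.0e-5],     # failures per hour (illustrative)
    "citation": ["MIL-HDBK-217F", "Vendor test report"],
})

basic_events = pd.DataFrame({
    "basic_event": ["VALVE-FTO", "PUMP-FTR"],
    "source_id": ["SRC-001", "SRC-002"],       # the unique metadata link
    "stress_factor": [2.0, 1.0],               # use/maintenance adjustment
    "dormancy_factor": [1.0, 0.3],
})

linked = basic_events.merge(sources, on="source_id")
linked["rate_used_in_model"] = (linked["base_failure_rate"]
                                * linked["stress_factor"]
                                * linked["dormancy_factor"])
print(linked[["basic_event", "source_id", "rate_used_in_model", "citation"]])
```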
Bergeon, N; Tourret, D; Chen, L; Debierre, J-M; Guérin, R; Ramirez, A; Billia, B; Karma, A; Trivedi, R
2013-05-31
We report results of directional solidification experiments conducted on board the International Space Station and quantitative phase-field modeling of those experiments. The experiments image in situ, for the first time, the spatially extended dynamics of three-dimensional cellular array patterns formed under microgravity conditions, where fluid flow is suppressed. Experiments and phase-field simulations reveal the existence of oscillatory breathing modes with time periods of several tens of minutes. Oscillating cells are usually noncoherent due to array disorder, with the exception of small areas where the array structure is regular and stable.
PEITH(Θ): perfecting experiments with information theory in Python with GPU support.
Dony, Leander; Mackerodt, Jonas; Ward, Scott; Filippi, Sarah; Stumpf, Michael P H; Liepe, Juliane
2018-04-01
Different experiments provide differing levels of information about a biological system. This makes it difficult, a priori, to select one of them beyond mere speculation and/or belief, especially when resources are limited. With the increasing diversity of experimental approaches and general advances in quantitative systems biology, methods that tell us how much information a given experiment carries about the question we want to answer become crucial. PEITH(Θ) is a general-purpose Python framework for experimental design in systems biology. PEITH(Θ) uses Bayesian inference and information theory to determine which experiments are most informative for estimating all model parameters and/or performing model predictions. https://github.com/MichaelPHStumpf/Peitho. m.stumpf@imperial.ac.uk or juliane.liepe@mpibpc.mpg.de.
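To illustrate the underlying principle rather than the PEITH(Θ) interface itself: for a linear-Gaussian toy model, the expected information gain of a candidate design has a closed form, so experiments can be ranked before any data are collected. A hedged sketch:

```python
# Toy illustration of information-theoretic experiment ranking (not the
# PEITH(Theta) API): for y = X @ theta + noise with Gaussian prior and
# noise, the expected information gain about theta is
# 0.5 * log det(I + X Sigma X^T / sigma^2).
import numpy as np

def expected_information_gain(X, prior_cov, noise_var):
    n = X.shape[0]
    M = np.eye(n) + X @ prior_cov @ X.T / noise_var
    return 0.5 * np.linalg.slogdet(M)[1]

prior_cov = np.eye(2)
design_a = np.array([[1.0, 0.0], [1.0, 0.1]])   # nearly redundant readouts
design_b = np.array([[1.0, 0.0], [0.0, 1.0]])   # complementary readouts
for name, X in [("A", design_a), ("B", design_b)]:
    print(name, expected_information_gain(X, prior_cov, noise_var=0.1))
```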
Retrospective Analysis of a Classical Biological Control Programme
USDA-ARS?s Scientific Manuscript database
1. Classical biological control has been a key technology in the management of invasive arthropod pests globally for over 120 years, yet rigorous quantitative evaluations of programme success or failure are rare. Here, I used life table and matrix model analyses, and life table response experiments ...
Fatigue Damage of Collagenous Tissues: Experiment, Modeling and Simulation Studies
Martin, Caitlin; Sun, Wei
2017-01-01
Mechanical fatigue damage is a critical issue for soft tissues and tissue-derived materials, particularly for musculoskeletal and cardiovascular applications; yet, our understanding of the fatigue damage process is incomplete. Soft tissue fatigue experiments are often difficult and time-consuming to perform, which has hindered progress in this area. However, the recent development of soft-tissue fatigue-damage constitutive models has enabled simulation-based fatigue analyses of tissues under various conditions. Computational simulations facilitate highly controlled and quantitative analyses to study the distinct effects of various loading conditions and design features on tissue durability; thus, they are advantageous over complex fatigue experiments. Although significant work to calibrate the constitutive models from fatigue experiments and to validate predictability remains, further development in these areas will add to our knowledge of soft-tissue fatigue damage and will facilitate the design of durable treatments and devices. In this review, the experimental, modeling, and simulation efforts to study collagenous tissue fatigue damage are summarized and critically assessed. PMID:25955007
Simulation Of Combat With An Expert System
NASA Technical Reports Server (NTRS)
Provenzano, J. P.
1989-01-01
Proposed expert system predicts outcomes of combat situations. Called "COBRA", combat outcome based on rules for attrition, system selects rules for mathematical modeling of losses and discrete events in combat according to previous experiences. Used with another software module known as the "Game". Game/COBRA software system, consisting of Game and COBRA modules, provides for both quantitative aspects and qualitative aspects in simulations of battles. COBRA intended for simulation of large-scale military exercises, concepts embodied in it have much broader applicability. In industrial research, knowledge-based system enables qualitative as well as quantitative simulations.
Kim, Byungsuk; Woo, Young-Ah
2018-05-30
In this study, the authors developed a real-time Process Analytical Technology (PAT) for a film coating process by applying in-line Raman spectroscopy to evaluate coating weight gain, a quantitative measure of the film coating layer. A wide area illumination (WAI) Raman probe was connected to the pan coater for real-time monitoring of changes in the weight gain of the coating layer. Under the proposed in-line Raman scheme, a non-contact, non-destructive analysis was performed using a WAI Raman probe with a spot size of 6 mm. The in-line Raman probe maintained a focal length of 250 mm, and a compressed air line was designed to protect the lens surface from spray droplets. Design of Experiments (DOE) was applied to identify factors affecting the background of the Raman spectra under laser irradiation. The factors selected for DOE were the strength of the compressed air connected to the probe and the shielding of light by the transparent door connecting the probe to the pan coater. To develop a quantitative model, partial least squares (PLS) models were built as multivariate calibrations based on the three spectral regions showing the specificity of TiO2, individually or in combination. For the three single peaks (636 cm^-1, 512 cm^-1, 398 cm^-1), the least squares method (LSM) was applied to develop three univariate quantitative models. The best multivariate model, with a single PLS factor, gave the lowest RMSEP values of 0.128, 0.129, and 0.125 for the prediction batches. When LSM was applied to the single peak at 636 cm^-1, the univariate model, with an R^2 of 0.9863, slope of 0.5851, and y-intercept of 0.8066, had the lowest RMSEP values of 0.138, 0.144, and 0.153 for the prediction batches. The in-line Raman spectroscopic method for the analysis of coating weight gain was validated by considering system suitability and parameters such as specificity, range, linearity, accuracy, and precision in accordance with ICH Q2 on method validation. The proposed in-line Raman spectroscopy can be utilized as a PAT for product quality assurance, as it offers real-time monitoring of quantitative changes in coating weight gain and of process end-points during the film coating process. Copyright © 2018 Elsevier B.V. All rights reserved.
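As a schematic of the two calibration routes described above (a one-factor PLS model on full spectra versus a univariate least-squares fit on a single peak), the following uses synthetic spectra; all numbers and names are illustrative, not from the paper:

```python
# Sketch of the two calibration routes on synthetic Raman-like spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
weight_gain = rng.uniform(1.0, 4.0, size=40)            # % coating weight gain
wavenumbers = np.linspace(200, 800, 300)
peak = np.exp(-0.5 * ((wavenumbers - 636) / 8) ** 2)    # TiO2-like band at 636 cm^-1
spectra = (weight_gain[:, None] * peak[None, :]
           + 0.02 * rng.standard_normal((40, 300)))

# Multivariate route: PLS with a single latent factor.
pls = PLSRegression(n_components=1)
pls.fit(spectra, weight_gain)
rmse = np.sqrt(np.mean((pls.predict(spectra).ravel() - weight_gain) ** 2))
print("PLS RMSE:", rmse)

# Univariate route: least squares on the single-peak intensity.
intensity = spectra[:, np.argmin(np.abs(wavenumbers - 636))]
slope, intercept = np.polyfit(intensity, weight_gain, 1)
print("LSM fit: slope=%.3f intercept=%.3f" % (slope, intercept))
```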
Mechanochemical models of processive molecular motors
NASA Astrophysics Data System (ADS)
Lan, Ganhui; Sun, Sean X.
2012-05-01
Motor proteins are the molecular engines powering the living cell. These nanometre-sized molecules convert chemical energy, both enthalpic and entropic, into useful mechanical work. High-resolution single-molecule experiments can now observe motor protein movement with increasing precision. The emerging data must be combined with structural and kinetic measurements to develop a quantitative mechanism. This article describes a modelling framework in which a quantitative understanding of motor behaviour can be developed based on the protein structure. The framework is applied to myosin motors, with emphasis on how synchrony between motor domains gives rise to processive unidirectional movement. The modelling approach shows that the elasticity of protein domains is important in regulating motor function. Simple models of protein domain elasticity are presented. The framework can be generalized to other motor systems, or to an ensemble of motors such as muscle contraction. Indeed, for hundreds of myosins, our framework reduces to the Huxley-Simmons description of muscle movement in the mean-field limit.
NASA Technical Reports Server (NTRS)
Dum, C. T.
1990-01-01
Particle simulation experiments were used to study the basic physical ingredients needed for building a global model of foreshock wave phenomena. In particular, the generation of Langmuir waves by a gentle bump-on-tail electron distribution is analyzed. It is shown that, with appropriately designed simulation experiments, quasi-linear theory can be quantitatively verified for parameters corresponding to the electron foreshock.
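For reference, the generic form of the quasi-linear equations being tested is shown below; coefficient conventions vary between texts and are omitted here:

```latex
% Generic quasi-linear description of the bump-on-tail problem:
% the distribution diffuses in velocity, and waves grow where the
% slope of f is positive at the resonant phase velocity.
\begin{align}
  \frac{\partial f}{\partial t} &=
    \frac{\partial}{\partial v}\!\left[ D(v)\,\frac{\partial f}{\partial v} \right],
  &
  \gamma_k &\propto \left.\frac{\partial f}{\partial v}\right|_{v = \omega_k / k},
\end{align}
% where D(v) is proportional to the spectral wave energy at the resonant
% wavenumber k = omega_pe / v, so the bump flattens into a plateau as the
% waves grow.
```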
Gaass, Thomas; Schneider, Moritz Jörg; Dietrich, Olaf; Ingrisch, Michael; Dinkel, Julien
2017-04-01
Variability across devices, patients, and time still hinders widespread recognition of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) as quantitative biomarker. The purpose of this work was to introduce and characterize a dedicated microchannel phantom as a model for quantitative DCE-MRI measurements. A perfusable, MR-compatible microchannel network was constructed on the basis of sacrificial melt-spun sugar fibers embedded in a block of epoxy resin. Structural analysis was performed on the basis of light microscopy images before DCE-MRI experiments. During dynamic acquisition the capillary network was perfused with a standard contrast agent injection system. Flow-dependency, as well as inter- and intrascanner reproducibility of the computed DCE parameters were evaluated using a 3.0 T whole-body MRI. Semi-quantitative and quantitative flow-related parameters exhibited the expected proportionality to the set flow rate (mean Pearson correlation coefficient: 0.991, P < 2.5e-5). The volume fraction was approximately independent from changes of the applied flow rate through the phantom. Repeatability and reproducibility experiments yielded maximum intrascanner coefficients of variation (CV) of 4.6% for quantitative parameters. All evaluated parameters were well in the range of known in vivo results for the applied flow rates. The constructed phantom enables reproducible, flow-dependent, contrast-enhanced MR measurements with the potential to facilitate standardization and comparability of DCE-MRI examinations. © 2017 American Association of Physicists in Medicine.
Analysing neutron scattering data using McStas virtual experiments
NASA Astrophysics Data System (ADS)
Udby, L.; Willendrup, P. K.; Knudsen, E.; Niedermayer, Ch.; Filges, U.; Christensen, N. B.; Farhi, E.; Wells, B. O.; Lefmann, K.
2011-04-01
With the intention of developing a new data analysis method using virtual experiments we have built a detailed virtual model of the cold triple-axis spectrometer RITA-II at PSI, Switzerland, using the McStas neutron ray-tracing package. The parameters characterising the virtual instrument were carefully tuned against real experiments. In the present paper we show that virtual experiments reproduce experimentally observed linewidths within 1-3% for a variety of samples. Furthermore we show that the detailed knowledge of the instrumental resolution found from virtual experiments, including sample mosaicity, can be used for quantitative estimates of linewidth broadening resulting from, e.g., finite domain sizes in single-crystal samples.
Quantitative metal magnetic memory reliability modeling for welded joints
NASA Astrophysics Data System (ADS)
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and difficult to evaluate quantitatively. To promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Welded specimens of steel Q235 were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments, with X-ray testing carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of the MMM data, the statistical law of K_vs is investigated, showing that K_vs obeys a Gaussian distribution; K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and that the maximal error between the predicted reliability degree R_1 and the verification reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
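The paper's improved interference model is not reproduced in the abstract; the sketch below shows the generic stress-strength interference calculation for Gaussian variables that such models build on, with illustrative numbers only:

```python
# Generic stress-strength interference with Gaussian variables (a sketch,
# not the paper's improved variant): failure occurs when the damage-driven
# "stress" (here, K_vs) exceeds the "strength" threshold.
import numpy as np
from scipy.stats import norm

def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """R = P(strength > stress) for independent Gaussians."""
    z = (mu_strength - mu_stress) / np.hypot(sd_strength, sd_stress)
    return norm.cdf(z)

# Illustrative numbers only: K_vs drifts upward as residual life ratio T drops.
for T, mu_kvs in [(0.9, 40.0), (0.5, 70.0), (0.1, 95.0)]:
    print(T, reliability(mu_strength=100.0, sd_strength=10.0,
                         mu_stress=mu_kvs, sd_stress=12.0))
```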
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
NASA Astrophysics Data System (ADS)
Tourret, D.; Karma, A.; Clarke, A. J.; Gibbs, P. J.; Imhoff, S. D.
2015-06-01
We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the growth of complex dendritic microstructures. Using a new formulation of the DNN dynamics equations for dendritic paraboloid-branches of a given thickness, one can directly extend the DNN approach to 3D modeling. We validate this new formulation against known scaling laws and analytical solutions that describe the early transient and steady-state growth regimes, respectively. Finally, we compare the predictions of the model to in situ X-ray imaging of Al-Cu alloy solidification experiments. The comparison shows a very good quantitative agreement between 3D simulations and thin sample experiments. It also highlights the importance of full 3D modeling to accurately predict the primary dendrite arm spacing that is significantly over-estimated by 2D simulations.
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed that makes use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
Foran, Paula
2016-01-01
The aim of this research was to determine whether guided operating theatre experience in the undergraduate nursing curricula enhanced surgical knowledge and understanding of the nursing care provided outside this specialist area, in the pre- and post-operative surgical wards. Using quantitative analyses, undergraduate nurses were tested on their knowledge of pre- and post-operative surgical nursing in their final semester of study. As much learning occurs in nurses' first year of practice, participants were re-tested after their Graduate Nurse Program/Preceptorship year. Participants' results were compared with the model of operating room education they had participated in, to determine whether there was a relationship between the type of theatre education they experienced (if any) and their knowledge of surgical ward nursing. Findings revealed that undergraduate nurses receiving guided operating theatre experience had a 76% pass rate, compared with 56% for those with non-guided or no experience (p < 0.001). Graduates with guided operating theatre experience as undergraduates or graduate nurses achieved a 100% pass rate, compared with 53% for those with non-guided or no experience (p < 0.001). The research informs us that undergraduate nurses achieve greater learning about surgical ward nursing via guided operating room experience than via surgical ward nursing experience alone. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fan, X.; Chen, L.; Ma, Z.
2010-12-01
Climate downscaling has been an active research and application area in recent decades, focusing on regional climate studies. Dynamical downscaling, in addition to statistical methods, has become widely used as advanced numerical weather and regional climate models have emerged. Using numerical models ensures that a full set of climate variables is generated in the downscaling process, and that these variables are dynamically consistent because they are constrained by physical laws. While generating a high resolution regional climate, the large-scale climate patterns should be retained. To serve this purpose, nudging techniques, including grid analysis nudging and spectral nudging, have been used in different models. Previous studies have demonstrated the benefits and advantages of each nudging technique; however, the results are sensitive to many factors, such as the nudging coefficients and the amount of large-scale information used for nudging, and thus the conclusions remain controversial. Alongside companion work developing approaches for the quantitative assessment of downscaled climate, in this study the two nudging techniques are examined in extensive experiments with the Weather Research and Forecasting (WRF) model. Using the same model provides fair comparability, and applying the quantitative assessments provides objective comparison. Three types of downscaling experiments were performed for a selected month. The first serves as a baseline, in which large-scale information is communicated through the lateral boundary conditions only; the second uses grid analysis nudging; and the third uses spectral nudging. Emphasis is given to experiments with different nudging coefficients and with nudging of different variables in grid analysis nudging, while for spectral nudging we focus on testing the nudging coefficients and different wave numbers on different model levels.
Health impact assessment – A survey on quantifying tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehr, Rainer, E-mail: rainer.fehr@uni-bielefeld.de; Mekel, Odile C.L., E-mail: odile.mekel@lzg.nrw.de; Fintan Hurley, J., E-mail: fintan.hurley@iom-world.org
Integrating human health into prospective impact assessments is known to be challenging. This is true for both approaches: dedicated health impact assessments (HIA) as well as inclusion of health into more general impact assessments. Acknowledging the full range of participatory, qualitative, and quantitative approaches, this study focuses on the latter, especially on computational tools for quantitative health modelling. We conducted a survey among tool developers concerning the status quo of development and availability of such tools; experiences made with model usage in real-life situations; and priorities for further development. Responding toolmaker groups described 17 such tools, most of them being maintained and reported as ready for use and covering a wide range of topics, including risk & protective factors, exposures, policies, and health outcomes. In recent years, existing models have been improved and were applied in new ways, and completely new models emerged. There was high agreement among respondents on the need to further develop methods for assessment of inequalities and uncertainty. The contribution of quantitative modeling to health foresight would benefit from building joint strategies of further tool development, improving the visibility of quantitative tools and methods, and engaging continuously with actual and potential users. - Highlights: • A survey investigated computational tools for health impact quantification. • Formal evaluation of such tools has been rare. • Handling inequalities and uncertainties are priority areas for further development. • Health foresight would benefit from tool developers and users forming a community. • Joint development strategies across computational tools are needed.
Effect of periodic changes of angle of attack on behavior of airfoils
NASA Technical Reports Server (NTRS)
Katzmayr, R
1922-01-01
This report presents the results of a series of experiments that yielded quantitative data on the effect of periodic changes in the direction of the relative air flow against airfoils. In the first series of experiments, the angle of attack of the wing model was changed by causing the model to oscillate about an axis parallel to the span and at right angles to the air flow. The second series embraced all the experiments in which the direction of the air flow itself was periodically changed.
Structure of marginally jammed polydisperse packings of frictionless spheres
NASA Astrophysics Data System (ADS)
Zhang, Chi; O'Donovan, Cathal B.; Corwin, Eric I.; Cardinaux, Frédéric; Mason, Thomas G.; Möbius, Matthias E.; Scheffold, Frank
2015-03-01
We model the packing structure of a marginally jammed bulk ensemble of polydisperse spheres. To this end we expand on the granocentric model [Clusel et al., Nature (London) 460, 611 (2009), 10.1038/nature08158], explicitly taking into account rattlers. This leads to a relationship between the characteristic parameters of the packing, such as the mean number of neighbors and the fraction of rattlers, and the radial distribution function g(r). We find excellent agreement between the model predictions for g(r) and packing simulations, as well as experiments on jammed emulsion droplets. The observed quantitative agreement opens the path towards a full structural characterization of jammed particle systems for imaging and scattering experiments.
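For readers who want to reproduce such a comparison, g(r) can be estimated from particle coordinates with the standard pair-distance histogram; a generic sketch for a periodic cubic box (not the authors' code):

```python
# Standard g(r) estimator for a cubic box with periodic boundaries.
import numpy as np

def radial_distribution(positions, box_length, n_bins=100, r_max=None):
    n = len(positions)
    r_max = r_max or box_length / 2
    d = positions[:, None, :] - positions[None, :, :]
    d -= box_length * np.round(d / box_length)        # minimum-image convention
    dist = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(dist, bins=n_bins, range=(0, r_max))
    rho = n / box_length ** 3
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = hist / (shell * rho * n / 2)                  # normalize per pair
    return 0.5 * (edges[1:] + edges[:-1]), g

pos = np.random.default_rng(1).uniform(0, 10.0, (500, 3))
r, g = radial_distribution(pos, box_length=10.0)
print(g[:5])  # approximately 1 everywhere for an ideal gas
```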
NASA Technical Reports Server (NTRS)
Cotton, W. R.; Tripoli, G. J.
1980-01-01
Major research accomplishments which were achieved during the first year of the grant are summarized. The research concentrated in the following areas: (1) an examination of observational requirements for predicting convective storm development and intensity as suggested by recent numerical experiments; (2) interpretation of recent 3D numerical experiments with regard to the relationship between overshooting tops and surface wind gusts; (3) the development of software for emulating satellite-inferred cloud properties using 3D cloud model predicted data; and (4) the development of a conceptual/semi-quantitative model of eastward propagating, mesoscale convective complexes forming to the lee of the Rocky Mountains.
Ma, Jun; Liu, Lei; Ge, Sai; Xue, Qiang; Li, Jiangshan; Wan, Yong; Hui, Xinminnan
2018-03-01
A quantitative description of aerobic waste degradation is important for evaluating landfill waste stability and economical management. This research aimed to develop a coupling model to predict the degree of aerobic waste degradation. On the basis of a first-order kinetic equation and the law of conservation of mass, we developed a coupling model of aerobic waste degradation that considers temperature, initial moisture content, and air injection volume to simulate and predict the chemical oxygen demand (COD) in the leachate. Three different laboratory experiments on aerobic waste degradation were simulated to test the model's applicability, and parameter sensitivity analyses were conducted to evaluate the reliability of the parameters. The coupling model can simulate aerobic waste degradation, and its simulations agreed with the corresponding experimental results. The comparison between experiment and simulation demonstrates that the coupling model is a new approach to predicting aerobic waste degradation and can serve as a basis for selecting an economical air injection volume and appropriate management in the future.
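A minimal sketch of the first-order backbone of such a model follows; the temperature and moisture corrections shown are generic assumptions, not the authors' calibrated expressions:

```python
# First-order COD decay with generic temperature/moisture corrections
# (illustrative assumptions only).
import numpy as np

def cod(t_days, cod0, k_ref, temp_c, moisture, temp_ref=35.0, theta=1.06):
    """COD in leachate under first-order aerobic degradation kinetics."""
    k = k_ref * theta ** (temp_c - temp_ref)   # Arrhenius-type correction
    k *= min(moisture / 0.5, 1.0)              # crude moisture limitation
    return cod0 * np.exp(-k * t_days)

t = np.arange(0, 121, 30)
print(cod(t, cod0=20000.0, k_ref=0.03, temp_c=45.0, moisture=0.55))
```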
Quantitative cell biology: the essential role of theory.
Howard, Jonathon
2014-11-05
Quantitative biology is a hot area, as evidenced by the recent establishment of institutes, graduate programs, and conferences with that name. But what is quantitative biology? What should it be? And how can it contribute to solving the big questions in biology? The past decade has seen very rapid development of quantitative experimental techniques, especially at the single-molecule and single-cell levels. In this essay, I argue that quantitative biology is much more than just the quantitation of these experimental results. Instead, it should be the application of the scientific method by which measurement is directed toward testing theories. In this view, quantitative biology is the recognition that theory and models play critical roles in biology, as they do in physics and engineering. By tying together experiment and theory, quantitative biology promises a deeper understanding of underlying mechanisms, when the theory works, or to new discoveries, when it does not. © 2014 Howard. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Measurement of air and VOC vapor fluxes during gas-driven soil remediation: bench-scale experiments.
Kim, Heonki; Kim, Taeyun; Shin, Seungyeop; Annable, Michael D
2012-09-04
In this laboratory study, an experimental method was developed for the quantitative analyses of gas fluxes in soil during advective air flow. One-dimensional column and two- and three-dimensional flow chamber models were used in this study. For the air flux measurement, n-octane vapor was used as a tracer, and it was introduced in the air flow entering the physical models. The tracer (n-octane) in the gas effluent from the models was captured for a finite period of time using a pack of activated carbon, which then was analyzed for the mass of n-octane. The air flux was calculated based on the mass of n-octane captured by the activated carbon and the inflow concentration. The measured air fluxes are in good agreement with the actual values for one- and two-dimensional model experiments. Using both the two- and three-dimensional models, the distribution of the air flux at the soil surface was measured. The distribution of the air flux was found to be affected by the depth of the saturated zone. The flux and flux distribution of a volatile contaminant (perchloroethene) was also measured by using the two-dimensional model. Quantitative information of both air and contaminant flux may be very beneficial for analyzing the performance of gas-driven subsurface remediation processes including soil vapor extraction and air sparging.
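The flux bookkeeping implied above reduces to dividing the trapped tracer mass by the product of inlet concentration, sampled area, and sampling time; a tiny sketch with assumed units:

```python
# Symbols assumed for illustration: m = tracer mass trapped on the carbon,
# c_in = inlet tracer concentration, area = surface area sampled,
# dt = sampling duration.
def air_flux(m_tracer_mg, c_in_mg_per_L, area_cm2, dt_min):
    """Air flux in L per cm^2 per minute."""
    return m_tracer_mg / (c_in_mg_per_L * area_cm2 * dt_min)

print(air_flux(m_tracer_mg=12.0, c_in_mg_per_L=0.5, area_cm2=100.0, dt_min=60.0))
```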
Predicting human blood viscosity in silico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedosov, Dmitry A.; Pan, Wenxiao; Caswell, Bruce
2011-07-05
Cellular suspensions such as blood are a part of living organisms, and their rheological and flow characteristics determine and affect the majority of vital functions. The rheological and flow properties of cell suspensions are determined by the collective dynamics of cells, their structure or arrangement, cell properties, and interactions. We study these relations for blood in silico using a mesoscopic particle-based method and two different models (multi-scale/low-dimensional) of red blood cells. The models yield accurate quantitative predictions of the dependence of blood viscosity on shear rate and hematocrit. We explicitly model cell aggregation interactions and demonstrate the formation of reversible rouleaux structures, resulting in a tremendous increase of blood viscosity at low shear rates and a yield stress, in agreement with experiments. The non-Newtonian behavior of such cell suspensions (e.g., shear thinning, yield stress) is analyzed and related to the suspension's microstructure and the deformation and dynamics of single cells. We provide the first quantitative estimates of normal stress differences and the magnitude of aggregation forces in blood. Finally, the flexibility of the cell models allows them to be employed for quantitative analysis of a much wider class of complex fluids including cell, capsule, and vesicle suspensions.
Predicting plant biomass accumulation from image-derived parameters
Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian
2018-01-01
Background: Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they provide a great potential to gain more valuable information than traditionally destructive methods. Predicting plant biomass is regarded as a key purpose for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results: In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results proved that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance is still relatively high across experiments under similar conditions. The relative contribution of individual features for predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions: We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our views of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559
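A schematic of the random forest route reported above, on synthetic data; the feature names and coefficients are invented for illustration:

```python
# Random forest regression of biomass on image-derived features
# (synthetic data; not the authors' dataset or feature set).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 400
X = np.column_stack([
    rng.uniform(100, 5000, n),    # projected leaf area (pixels)
    rng.uniform(5, 60, n),        # plant height (pixels)
    rng.uniform(0.1, 0.9, n),     # greenness index
])
biomass = 0.002 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2:", rf.score(X_te, y_te))
print("relative feature contributions:", rf.feature_importances_)
```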
Wolverton, Christopher; Hattrick-Simpers, Jason; Mehta, Apurva
2018-01-01
With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict. PMID:29662953
Mediators and Metaphorical Analysis: A Phenomenological Study of Florida Family Court Mediators
ERIC Educational Resources Information Center
Storrow, Rebecca A.
2012-01-01
Florida family court mediation programs have typically been assessed with quantitative analysis. To understand the complexity of the experience of being a family mediator, it was necessary to explore how mediators practiced through qualitative research. Metaphors have been considered to be representations of mediators' mental models regarding…
Examining the Cultural Validity of a College Student Engagement Survey for Latinos
ERIC Educational Resources Information Center
Hernandez, Ebelia; Mobley, Michael; Coryell, Gayle; Yu, En-Hui; Martinez, Gladys
2013-01-01
Using critical race theory and quantitative criticalist stance, this study examines the construct validity of an engagement survey, "Student Experiences in the Research University" (SERU) for Latino college students through exploratory factor analysis. Results support the principal seven-factor SERU model. However subfactors exhibited…
Factors Influencing Student Achievement in Different Asian American Pacific Islander Cultures
ERIC Educational Resources Information Center
Marsing, Deborah J.
2017-01-01
Asian American Pacific Islander (AAPI) students are often characterized as model minorities. However, AAPI students represent many diverse communities and a wide spectrum of achievement. Each AAPI culture may experience varying levels of biculturalism and acculturation that can influence students' academic success. This quantitative study…
Quantitative Comparisons to Promote Inquiry in the Introductory Physics Lab
ERIC Educational Resources Information Center
Holmes, N. G.; Bonn, D. A.
2015-01-01
In a recent report, the American Association of Physics Teachers has developed an updated set of recommendations for curriculum of undergraduate physics labs. This document focuses on six major themes: constructing knowledge, modeling, designing experiments, developing technical and practical laboratory skills, analyzing and visualizing data, and…
Quantitative analysis of protein-ligand interactions by NMR.
Furukawa, Ayako; Konuma, Tsuyoshi; Yanaka, Saeko; Sugase, Kenji
2016-08-01
Protein-ligand interactions have been commonly studied through static structures of the protein-ligand complex. Recently, however, there has been increasing interest in investigating the dynamics of protein-ligand interactions both for fundamental understanding of the underlying mechanisms and for drug development. NMR is a versatile and powerful tool, especially because it provides site-specific quantitative information. NMR has widely been used to determine the dissociation constant (KD), in particular, for relatively weak interactions. The simplest NMR method is a chemical-shift titration experiment, in which the chemical-shift changes of a protein in response to ligand titration are measured. There are other quantitative NMR methods, but they mostly apply only to interactions in the fast-exchange regime. These methods derive the dissociation constant from population-averaged NMR quantities of the free and bound states of a protein or ligand. In contrast, the recent advent of new relaxation-based experiments, including R2 relaxation dispersion and ZZ-exchange, has enabled us to obtain kinetic information on protein-ligand interactions in the intermediate- and slow-exchange regimes. Based on R2 dispersion or ZZ-exchange, methods that can determine the association rate, kon, dissociation rate, koff, and KD have been developed. In these approaches, R2 dispersion or ZZ-exchange curves are measured for multiple samples with different protein and/or ligand concentration ratios, and the relaxation data are fitted to theoretical kinetic models. It is critical to choose an appropriate kinetic model, such as the two- or three-state exchange model, to derive the correct kinetic information. The R2 dispersion and ZZ-exchange methods are suitable for the analysis of protein-ligand interactions with a micromolar or sub-micromolar dissociation constant but not for very weak interactions, which are typical in very fast exchange. This contrasts with the NMR methods that are used to analyze population-averaged NMR quantities. Essentially, to apply NMR successfully, both the type of experiment and equation to fit the data must be carefully and specifically chosen for the protein-ligand interaction under analysis. In this review, we first explain the exchange regimes and kinetic models of protein-ligand interactions, and then describe the NMR methods that quantitatively analyze these specific interactions. Copyright © 2016 Elsevier B.V. All rights reserved.
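A sketch of the simplest route mentioned above: fitting a fast-exchange chemical-shift titration to the one-site binding isotherm to recover KD. The concentrations and noise level are invented for illustration:

```python
# One-site binding isotherm for P + L <-> PL under fast exchange:
# the observed shift change is the bound fraction times ddelta_max.
import numpy as np
from scipy.optimize import curve_fit

def shift_change(L_tot, K_D, dmax, P_tot=0.1):      # concentrations in mM
    """Population-averaged chemical-shift change at total ligand L_tot."""
    b = P_tot + L_tot + K_D
    pl = (b - np.sqrt(b ** 2 - 4 * P_tot * L_tot)) / 2
    return dmax * pl / P_tot

L = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
noise = 0.003 * np.random.default_rng(3).standard_normal(L.size)
obs = shift_change(L, K_D=0.25, dmax=0.30) + noise

popt, _ = curve_fit(shift_change, L, obs, p0=[0.1, 0.2])
print("fitted K_D = %.3f mM, ddelta_max = %.3f ppm" % tuple(popt))
```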
A mathematical model of cortical bone remodeling at cellular level under mechanical stimulus
NASA Astrophysics Data System (ADS)
Qin, Qing-Hua; Wang, Ya-Nan
2012-12-01
A bone cell population dynamics model for cortical bone remodeling under mechanical stimulus is developed in this paper. External experiments extracted from the literature, which were not used in the creation of the model, are used to test its validity. The model not only compares reasonably well with these experimental results, such as the percentage increase in the final values of bone mineral content (BMC) and bone fracture energy (BFE) among different loading schemes (which supports the validity of the model), but also predicts the real-time development pattern of BMC and BFE, as well as the dynamics of osteoblasts (OBA), osteoclasts (OCA), nitric oxide (NO), and prostaglandin E2 (PGE2) for each loading scheme, which can hardly be monitored through experiment. In conclusion, the model is the first of its kind able to provide insight into the quantitative mechanism of bone remodeling at the cellular level, by which bone cells are activated by mechanical stimulus in order to start resorption or formation of bone mass. More importantly, this model lays a solid foundation for future work such as a systemic control-theory analysis of bone remodeling under mechanical stimulus. The control mechanism to be identified will help to develop effective drugs and combined non-pharmacological therapies to combat bone loss pathologies. This deeper understanding of how mechanical forces quantitatively interact with skeletal tissue is also essential for the generation of bone tissue for tissue replacement purposes in tissue engineering.
Beckett, Kate; Earthy, Sarah; Sleney, Jude; Barnes, Jo; Kellezi, Blerina; Barker, Marcus; Clarkson, Julie; Coffey, Frank; Elder, Georgina; Kendrick, Denise
2014-01-01
Objective To explore views of service providers caring for injured people on: the extent to which services meet patients’ needs and their perspectives on factors contributing to any identified gaps in service provision. Design Qualitative study nested within a quantitative multicentre longitudinal study assessing longer term impact of unintentional injuries in working age adults. Sampling frame for service providers was based on patient-reported service use in the quantitative study, patient interviews and advice of previously injured lay research advisers. Service providers’ views were elicited through semistructured interviews. Data were analysed using thematic analysis. Setting Participants were recruited from a range of settings and services in acute hospital trusts in four study centres (Bristol, Leicester, Nottingham and Surrey) and surrounding areas. Participants 40 service providers from a range of disciplines. Results Service providers described two distinct models of trauma care: an ‘ideal’ model, informed by professional knowledge of the impact of injury and awareness of best models of care, and a ‘real’ model based on the realities of National Health Service (NHS) practice. Participants’ ‘ideal’ model was consistent with standards of high-quality effective trauma care and while there were examples of services meeting the ideal model, ‘real’ care could also be fragmented and inequitable with major gaps in provision. Service provider accounts provide evidence of comprehensive understanding of patients’ needs, awareness of best practice, compassion and research but reveal significant organisational and resource barriers limiting implementation of knowledge in practice. Conclusions Service providers envisage an ‘ideal’ model of trauma care which is timely, equitable, effective and holistic, but this can differ from the care currently provided. Their experiences provide many suggestions for service improvements to bridge the gap between ‘real’ and ‘ideal’ care. Using service provider views to inform service design and delivery could enhance the quality, patient experience and outcomes of care. PMID:25005598
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Lizee, Arnaud
1996-01-01
The object of this work, started in March 1995, is to determine the transport conditions (and the effects of residual acceleration) during the plane-front directional solidification of a tin-bismuth alloy under low-gravity conditions. The work combines 2D and 3D numerical models, scaling analyses, 1D models, and the results of ground-based and low-gravity experiments. The experiments were conducted in the MEPHISTO furnace facility during the USMP-3 spaceflight, which took place earlier this year (22 Feb. - 6 Mar. 1996). This experiment represents an unprecedented opportunity to make a quantitative correlation between residual accelerations and the response of an actual experimental solidification system.
Polymer Brushes under High Load
Balko, Suzanne M.; Kreer, Torsten; Costanzo, Philip J.; Patten, Tim E.; Johner, Albert; Kuhl, Tonya L.; Marques, Carlos M.
2013-01-01
Polymer coatings are frequently used to provide repulsive forces between surfaces in solution. After 25 years of design and study, a quantitative model to explain and predict repulsion under strong compression is still lacking. Here, we combine experiments, simulations, and theory to study polymer coatings under high loads and demonstrate a validated model for the repulsive forces, proposing that this universal behavior can be predicted from the polymer solution properties. PMID:23516470
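For context, the classic Alexander-de Gennes expression is the usual starting point for such force predictions, and the strong-compression regime addressed here is precisely where its validity has been in question. In the notation below, two brush-coated plates at separation D carry chains grafted at spacing s with unperturbed brush height L_0:

```latex
% Alexander-de Gennes estimate of the pressure between two compressed
% brushes (valid for D < 2L_0); shown for orientation, not as the
% validated model proposed in the paper.
\begin{equation}
  P(D) \;\approx\; \frac{k_B T}{s^{3}}
  \left[ \left( \frac{2L_0}{D} \right)^{9/4}
       - \left( \frac{D}{2L_0} \right)^{3/4} \right]
\end{equation}
% The first term is the osmotic repulsion of the confined monomers; the
% second is the elastic energy gained on compression.
```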
Characteristics and fall experiences of older adults with and without fear of falling outdoors.
Chippendale, Tracy; Lee, Chang Dae
2018-06-01
Using a theoretical model that combines an ecological perspective and Bandura's theory of self-efficacy as a guide, we sought to compare experiences and characteristics of community dwelling older adults with and without concern about falling outdoors. A survey of randomly selected community dwelling older adults across NYC (N = 120) was conducted using the outdoor falls questionnaire. Descriptive quantitative analyses of participant characteristics were conducted for all participants and for those with and without concern about falling outside. Conventional content analysis using two coders was employed to examine outdoor fall experiences for each group. A mixed methods matrix was used to integrate qualitative and quantitative findings. Some participant characteristics were more common among those with a concern about falling outside such as decreased functional status, female gender, and number of prior outdoor falls. As per descriptions of outdoor fall experiences, participants with concern were more likely to report a fall while climbing stairs or stepping up a curb, describe an intrinsic factor as a cause of their fall, use an injury prevention strategy during the fall, sustain a moderate to severe injury, seek medical attention, have had an ambulance called, require help to get up, and describe implementation of a behavioral change after the fall. Differences exist in participant characteristics and outdoor fall experiences of those with and without concern about falling outside. The proposed model can be used to understand fear of falling outdoors and can help to inform the target population and content of intervention programs.
Development and Application of an Analyst Process Model for a Search Task Scenario
2013-12-01
varied experience levels of the users we will be looking at not only testing the new tool, but also understanding the impact on user groups that the...each group using the toolsets to complete search tasks. 2.4 Hypotheses This research effort seeks to test the following hypotheses: H0... quantitative measures: report quality, errors, and cognitive workload. Due to the crossover design of the experiment, these were analyzed by group and within
Weathering profiles in soils and rocks on Earth and Mars
NASA Astrophysics Data System (ADS)
Hausrath, E.; Adcock, C. T.; Bamisile, T.; Baumeister, J. L.; Gainey, S.; Ralston, S. J.; Steiner, M.; Tu, V.
2017-12-01
Interactions of liquid water with rock, soil, or sediments can result in significant chemical and mineralogical changes with depth. These changes can include transformation from one phase to another as well as translocation, addition, and loss of material. The resulting chemical and mineralogical depth profiles can record characteristics of the interacting liquid water such as pH, temperature, duration, and abundance. We use a combined field, laboratory, and modeling approach to interpret the environmental conditions preserved in soils and rocks. We study depth profiles in terrestrial field environments; perform dissolution experiments of primary and secondary phases important in soil environments; and perform numerical modeling to quantitatively interpret weathering environments. In our field studies we have measured time-integrated basaltic mineral dissolution rates, and interpreted the impact of pH and temperature on weathering in basaltic and serpentine-containing rocks and soils. These results help us interpret fundamental processes occurring in soils on Earth and on Mars, and can also be used to inform numerical modeling and laboratory experiments. Our laboratory experiments provide fundamental kinetic data to interpret processes occurring in soils. We have measured dissolution rates of Mars-relevant phosphate minerals, clay minerals, and amorphous phases, as well as dissolution rates under specific Mars-relevant conditions such as in concentrated brines. Finally, reactive transport modeling allows a quantitative interpretation of the kinetic, thermodynamic, and transport processes occurring in soil environments. Such modeling allows the testing of conditions under longer time frames and under different conditions than might be possible under either terrestrial field or laboratory conditions. We have used modeling to examine the weathering of basalt, olivine, carbonate, phosphate, and clay minerals, and placed constraints on the duration, pH, and solution chemistry of past aqueous alteration occurring on Mars.
Quantitative, spectrally-resolved intraoperative fluorescence imaging
Valdés, Pablo A.; Leblond, Frederic; Jacobs, Valerie L.; Wilson, Brian C.; Paulsen, Keith D.; Roberts, David W.
2012-01-01
Intraoperative visual fluorescence imaging (vFI) has emerged as a promising aid to surgical guidance, but does not fully exploit the potential of the fluorescent agents that are currently available. Here, we introduce a quantitative fluorescence imaging (qFI) approach that converts spectrally-resolved data into images of absolute fluorophore concentration pixel-by-pixel across the surgical field of view (FOV). The resulting estimates are linear, accurate, and precise relative to true values, and spectral decomposition of multiple fluorophores is also achieved. Experiments with protoporphyrin IX in a glioma rodent model demonstrate in vivo quantitative and spectrally-resolved fluorescence imaging of infiltrating tumor margins for the first time. Moreover, we present images from human surgery which detect residual tumor not evident with state-of-the-art vFI. The wide-field qFI technique has broad implications for intraoperative surgical guidance because it provides near real-time quantitative assessment of multiple fluorescent biomarkers across the operative field. PMID:23152935
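The core computational step a qFI pipeline implies is per-pixel spectral decomposition. Below is a minimal sketch in Python of non-negative least-squares unmixing of a measured emission spectrum against known fluorophore basis spectra; the Gaussian basis shapes, wavelength grid, and noise level are synthetic placeholders, not the calibrated PpIX/background spectra of the actual system.

```python
import numpy as np
from scipy.optimize import nnls

# Per-pixel unmixing sketch: measured spectrum = basis @ concentrations + noise.
rng = np.random.default_rng(0)
wl = np.linspace(600, 720, 121)                    # emission wavelengths, nm
basis = np.column_stack([
    np.exp(-0.5 * ((wl - 635) / 10) ** 2),         # narrow fluorophore peak
    np.exp(-0.5 * ((wl - 680) / 40) ** 2),         # broad background component
])
true_c = np.array([2.0, 0.5])                      # arbitrary concentration units
measured = basis @ true_c + 0.01 * rng.standard_normal(wl.size)

conc, _ = nnls(basis, measured)                    # non-negative concentrations
print(conc)                                        # ~[2.0, 0.5]
```

Repeating this solve for every pixel in the field of view yields the concentration images the abstract describes.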
Inductive reasoning about causally transmitted properties.
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B
2008-11-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameters uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameters uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residuals rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameters uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
Magnetic Field Gradient Calibration as an Experiment to Illustrate Magnetic Resonance Imaging
ERIC Educational Resources Information Center
Seedhouse, Steven J.; Hoffmann, Markus M.
2008-01-01
A nuclear magnetic resonance (NMR) spectroscopy experiment for the undergraduate physical chemistry laboratory is described that encompasses both qualitative and quantitative pedagogical goals. Qualitatively, the experiment illustrates how images are obtained in magnetic resonance imaging (MRI). Quantitatively, students experience the…
Calovi, Daniel S.; Litchinko, Alexandra; Lopez, Ugo; Chaté, Hugues; Sire, Clément
2018-01-01
The development of tracking methods for automatically quantifying individual behavior and social interactions in animal groups has opened up new perspectives for building quantitative and predictive models of collective behavior. In this work, we combine extensive data analyses with a modeling approach to measure, disentangle, and reconstruct the actual functional form of interactions involved in the coordination of swimming in Rummy-nose tetra (Hemigrammus rhodostomus). This species of fish performs burst-and-coast swimming behavior that consists of sudden heading changes combined with brief accelerations followed by quasi-passive, straight decelerations. We quantify the spontaneous stochastic behavior of a fish and the interactions that govern wall avoidance and the reaction to a neighboring fish, the latter by exploiting general symmetry constraints for the interactions. In contrast with previous experimental works, we find that both attraction and alignment behaviors control the reaction of fish to a neighbor. We then exploit these results to build a model of spontaneous burst-and-coast swimming and interactions of fish, with all parameters being estimated or directly measured from experiments. This model quantitatively reproduces the key features of the motion and spatial distributions observed in experiments with a single fish and with two fish. This demonstrates the power of our method that exploits large amounts of data for disentangling and fully characterizing the interactions that govern collective behaviors in animal groups. PMID:29324853
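To make the burst-and-coast mechanics concrete, here is an illustrative Python sketch of a single-fish update rule in a circular tank: at each kick the heading receives a random increment plus an inward turn that grows near the wall, then the fish glides along the new heading with exponentially decaying speed. All parameter values and the specific form of the wall term are invented placeholders, not the functions fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
R, v0, tau, t_kick = 0.25, 0.14, 0.8, 0.5   # tank radius (m), burst speed (m/s),
                                            # glide decay time (s), kick period (s)
x = y = phi = 0.0
traj = []
for _ in range(500):
    r, theta = np.hypot(x, y), np.arctan2(y, x)        # position in the tank
    d_phi = 0.35 * rng.standard_normal()               # spontaneous heading noise
    # wall avoidance: turn toward the tank center, stronger close to the wall
    d_phi += 2.0 * np.exp(-(R - r) / 0.06) * np.sin(theta + np.pi - phi)
    phi += d_phi
    # glide length of one kick: integral of v0 * exp(-t/tau) over the kick period
    ell = v0 * tau * (1.0 - np.exp(-t_kick / tau))
    x, y = x + ell * np.cos(phi), y + ell * np.sin(phi)
    traj.append((x, y))
```

In the paper's approach the analogous interaction terms (including the neighbor attraction/alignment terms omitted here) are not postulated but extracted from the tracked trajectories.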
NASA Astrophysics Data System (ADS)
Cooke, M. L.
2015-12-01
Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model) and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandbox experiments where students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions the students would change and what outcomes they would expect from these changes, i.e., hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has proven to be a very effective assessment method, as the students reveal which features they are analyzing. Another approach used at UMass is to set up a forensic experiment: the apparatus is prepared with spatially varying basal friction before the meeting, and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.
The Large Area Crop Inventory Experiment /LACIE/ - A summary of three years' experience
NASA Technical Reports Server (NTRS)
Erb, R. B.; Moore, B. H.
1979-01-01
Aims, history and schedule of the Large Area Crop Inventory Experiment (LACIE) conducted by NASA, USDA and NOAA from 1974-1977 are described. The LACIE experiment, designed to research, develop, apply and evaluate a technology to monitor wheat production in important regions throughout the world (U.S., Canada, USSR, Brazil), utilized quantitative multispectral data collected by Landsat in concert with current weather data and historical information. The experiment successfully exploited computer data and mathematical models to extract timely crop information. Follow-on activities for the early 1980's are planned, focusing especially on the early warning of changes affecting production and quality of renewable resources and on commodity production forecasts.
Tropospheric Chemistry Studies using Observations from GOME and TOMS
NASA Technical Reports Server (NTRS)
Chance, Kelly; Spurr, Robert J. D.; Kurosu, Thomas P.; Jacob, Daniel J.; Gleason, James F.
2003-01-01
Studies to quantitatively determine trace gas and aerosol amounts from the Global Ozone Monitoring Experiment (GOME) and the Total Ozone Mapping Spectrometer (TOMS) and to perform chemical modeling studies which utilize these results are given. This includes: 1. Analysis of measurements from the GOME and TOMS instruments for tropospheric distributions of O3 and HCHO; tropospheric enhancements of SO2, NO2 and aerosols associated with major sources; and springtime events of elevated BrO in the lower Arctic troposphere. 2. Application of a global 3-dimensional model of tropospheric chemistry to interpret the GOME observations in terms of the factors controlling the abundances of tropospheric ozone and OH.
NASA Technical Reports Server (NTRS)
DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
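The fracture-mechanics criterion underlying VCCT reduces to a simple closed form. Below is a minimal Python sketch of the standard textbook mode I relation (not the authors' floating-node implementation), with illustrative input values:

```python
# Textbook VCCT estimate of the mode I energy release rate at a crack tip:
#   G_I = F_y * delta_v / (2 * da * b)
# i.e., the crack-tip nodal force times the opening displacement one element
# behind the tip, per unit of newly created crack area.
def vcct_mode_I(F_y, delta_v, da, b):
    """F_y: nodal force normal to the crack plane at the tip (N);
    delta_v: relative opening displacement behind the tip (m);
    da: element length along the crack front (m); b: element width (m)."""
    return F_y * delta_v / (2.0 * da * b)

G_I = vcct_mode_I(F_y=120.0, delta_v=2e-6, da=1e-4, b=5e-3)  # J/m^2
print(G_I)  # compare against the interlaminar toughness G_Ic in a growth criterion
```

In the approach described above, comparisons of such G values against measured toughnesses drive both crack advance and the migration decision between delamination and matrix cracking.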
Modeling RNA interference in mammalian cells
2011-01-01
Background RNA interference (RNAi) is a regulatory cellular process that controls post-transcriptional gene silencing. During RNAi double-stranded RNA (dsRNA) induces sequence-specific degradation of homologous mRNA via the generation of smaller dsRNA oligomers of length between 21-23nt (siRNAs). siRNAs are then loaded onto the RNA-Induced Silencing multiprotein Complex (RISC), which uses the siRNA antisense strand to specifically recognize mRNA species which exhibit a complementary sequence. Once the siRNA loaded-RISC binds the target mRNA, the mRNA is cleaved and degraded, and the siRNA loaded-RISC can degrade additional mRNA molecules. Despite the widespread use of siRNAs for gene silencing, and the importance of dosage for its efficiency and to avoid off-target effects, none of the numerous mathematical models proposed in the literature has been validated to quantitatively capture the effects of RNAi on the target mRNA degradation for different concentrations of siRNAs. Here, we address this pressing open problem by performing in vitro experiments of RNAi in mammalian cells and by testing and comparing different mathematical models, fitting them to the experimental data and to in-silico generated data. We performed in vitro experiments in human and hamster cell lines constitutively expressing respectively EGFP protein or tTA protein, measuring both mRNA levels, by quantitative Real-Time PCR, and protein levels, by FACS analysis, for a large range of concentrations of siRNA oligomers. Results We tested and validated four different mathematical models of RNA interference by quantitatively fitting models' parameters to best capture the in vitro experimental data. We show that a simple Hill kinetic model is the most efficient way to model RNA interference. Our experimental and modeling findings clearly show that the RNAi-mediated degradation of mRNA is subject to saturation effects. Conclusions Our model has a simple mathematical form amenable to analytical investigation and a small set of parameters with intuitive physical meaning, which makes it a unique and reliable mathematical tool. The findings presented here will be a useful instrument for better understanding RNAi biology and a modelling tool in Systems and Synthetic Biology. PMID:21272352
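A Hill-type kinetic model of the kind the study favors can be written in a few lines: target mRNA is produced at a constant rate, decays naturally, and is additionally degraded by an siRNA-dependent, saturable term. The Python sketch below uses illustrative parameter values, not the fitted ones from the paper.

```python
import numpy as np
from scipy.integrate import odeint

# dm/dt = k_m - d_m*m - v_max * s^h / (K^h + s^h) * m
# k_m: production; d_m: natural decay; the Hill term saturates for s >> K,
# reproducing the saturation of RNAi-mediated degradation described above.
def mrna_ode(m, t, k_m, d_m, v_max, K, h, siRNA):
    return k_m - d_m * m - v_max * siRNA**h / (K**h + siRNA**h) * m

t = np.linspace(0, 48, 200)                 # hours
for siRNA in (0.1, 1.0, 10.0, 100.0):       # nM; above ~K extra siRNA adds little
    m = odeint(mrna_ode, y0=1.0, t=t,
               args=(0.05, 0.05, 0.3, 5.0, 1.0, siRNA))
```

Sweeping the siRNA dose in this way and comparing steady-state mRNA against dose-response measurements is the essence of the fitting procedure the abstract describes.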
Using Qualitative Methods to Understand the Educational Experiences of Students with Dyslexia
ERIC Educational Resources Information Center
Zambo, Debby
2004-01-01
As readers, children with dyslexia are vulnerable to becoming academically, socially, and emotionally detached from education. Traditional educational practices tend to use quantitative measures to diagnose children to better serve their needs, and researchers who study students with special needs often focus on a deficit model that quantify just…
ERIC Educational Resources Information Center
Barnhardt, Cassie L.; Sheets, Jessica E.; Pasquesi, Kira
2015-01-01
This mixed-method analysis presents a model of college students' civic commitments and capacities for community action. Quantitative findings indicate that after controlling for background characteristics, campus contexts, and college experiences, students' acquisitions of commitments to and skills for contributing to the larger community are…
Quantitative determinations using portable Raman spectroscopy.
Navin, Chelliah V; Tondepu, Chaitanya; Toth, Roxana; Lawson, Latevi S; Rodriguez, Jason D
2017-03-20
A portable Raman spectrometer was used to develop chemometric models to determine percent (%) drug release and potency for 500 mg ciprofloxacin HCl tablets. Parallel dissolution and chromatographic experiments were conducted alongside Raman experiments to assess and compare the performance and capabilities of portable Raman instruments in determining critical drug attributes. All batches tested passed the 30 min dissolution specification, and the Raman model for drug release was able to essentially reproduce the dissolution profiles obtained by ultraviolet spectroscopy at 276 nm for all five batches of the 500 mg ciprofloxacin tablets. The five batches of 500 mg ciprofloxacin tablets also passed the potency (assay) specification, and the % label claim values for the entire set of tablets were nearly identical: 99.4±5.1 for the portable Raman method and 99.2±1.2 for the chromatographic method. The results indicate that portable Raman spectrometers can be used to perform quantitative analysis of critical product attributes of finished drug products. The findings of this study indicate that portable Raman may have applications in the areas of process analytical technology and rapid pharmaceutical surveillance. Published by Elsevier B.V.
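Chemometric models of this kind are typically partial least squares (PLS) regressions of a reference value on the spectra. The Python sketch below shows that generic workflow with synthetic placeholder spectra; the abstract does not specify the authors' exact model type or preprocessing, so this is an assumption about a standard approach.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins: 50 tablets x 800 Raman shifts, and a chromatographic
# reference potency (% label claim) for each tablet.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 800))
y = 99.0 + rng.normal(scale=2.0, size=50)

pls = PLSRegression(n_components=5)            # latent variables chosen by CV
y_cv = cross_val_predict(pls, X, y, cv=5)      # cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
print(rmsecv)                                  # figure of merit for the model
```

With real spectra, baseline correction and normalization before the regression, and selection of the component count by cross-validation, are the steps that determine whether the model meets the assay specification.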
Measuring the visual salience of alignments by their non-accidentalness.
Blusseau, S; Carboni, A; Maiche, A; Morel, J M; Grompone von Gioi, R
2016-09-01
Quantitative approaches are part of the understanding of contour integration and the Gestalt law of good continuation. The present study introduces a new quantitative approach based on the a contrario theory, which formalizes the non-accidentalness principle for good continuation. This model yields an ideal observer algorithm, able to detect non-accidental alignments in Gabor patterns. More precisely, this parameterless algorithm associates with each candidate percept a measure, the Number of False Alarms (NFA), quantifying its degree of masking. To evaluate the approach, we compared this ideal observer with human attentive performance in three experiments on straight contour detection in arrays of Gabor patches. The experiments showed a strong correlation between the detectability of the target stimuli and their degree of non-accidentalness, as measured by our model. What is more, the algorithm's detection curves were very similar to those of human subjects. This fact seems to validate our proposed measurement method as a convenient way to predict the visibility of alignments. This framework could be generalized to other Gestalts. Copyright © 2015 Elsevier Ltd. All rights reserved.
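In the a contrario framework, an NFA is the number of candidate events multiplied by the probability of the observed configuration arising by chance under a background model. A minimal Python sketch for an alignment of Gabor elements, assuming a binomial background model (the specific test precision and counts below are illustrative, not the paper's):

```python
from math import comb

# NFA = n_tests * P[at least k of n elements aligned by chance], where p is
# the precision of the alignment test. NFA << 1 flags a non-accidental percept.
def nfa(n_tests, n, k, p):
    tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
    return n_tests * tail

print(nfa(n_tests=1e5, n=8, k=7, p=0.05))  # tiny value => salient alignment
```

Lower NFA means less masking, which is the quantity the study correlates with human detection performance.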
Canik, John M.; Briesemeister, Alexis R.; McLean, Adam G.; ...
2017-05-10
Recent experiments in DIII-D helium plasmas are examined to resolve the role of atomic and molecular physics in major discrepancies between experiment and modeling of dissipative divertor operation. Helium operation removes the complicated molecular processes of deuterium plasmas that are a prime candidate for the inability of standard fluid models to reproduce dissipative divertor operation, primarily the consistent under-prediction of radiated power. Modeling of these experiments shows that the full divertor radiation can be accounted for, but only if measures are taken to ensure that the model reproduces the measured divertor density. Relying on upstream measurements instead results in a lower divertor density and radiation than is measured, indicating a need for improved modeling of the connection between the divertor and the upstream scrape-off layer. Furthermore, these results show that fluid models are able to quantitatively describe the divertor-region plasma, including radiative losses, and indicate that efforts to improve the fidelity of the molecular deuterium models are likely to help resolve the discrepancy in radiation for deuterium plasmas.
A comparative study of the constitutive models for silicon carbide
NASA Astrophysics Data System (ADS)
Ding, Jow-Lian; Dwivedi, Sunil; Gupta, Yogendra
2001-06-01
Most of the constitutive models for polycrystalline silicon carbide were developed and evaluated using data from either normal plate impact or Hopkinson bar experiments. At ISP, extensive efforts have been made to gain detailed insight into the shocked state of silicon carbide (SiC) using innovative experimental methods, viz., lateral stress measurements, in-material unloading measurements, and combined compression-shear experiments. The data obtained from these experiments provide some unique information for both developing and evaluating material models. In this study, these data for SiC were first used to evaluate some of the existing models to identify their strengths and possible deficiencies. Motivated by both the results of this comparative study and the experimental observations, an improved phenomenological model was developed. The model incorporates pressure dependence of strength, rate sensitivity, damage evolution under both tension and compression, pressure confinement effects on damage evolution, stiffness degradation due to damage, and pressure dependence of stiffness. The model captures most of the material features observed experimentally, but more work is needed to match the experimental data quantitatively.
Spatial cognition and navigation
NASA Technical Reports Server (NTRS)
Aretz, Anthony J.
1989-01-01
An experiment that provides data for the development of a cognitive model of pilot flight navigation is described. The experiment characterizes navigational awareness as the mental alignment of two frames of reference: (1) the ego-centered reference frame that is established by the forward view out of the cockpit and (2) the world-centered reference frame that is established by the aircraft's location on a map. The data support a model involving at least two components: (1) the perceptual encoding of the navigational landmarks and (2) the mental rotation of the map's world reference frame into alignment with the ego-centered reference frame. The quantitative relationships of these two factors are provided as possible inputs for a computational model of spatial cognition during flight navigation.
Design and optimization of reverse-transcription quantitative PCR experiments.
Tichopad, Ales; Kitchen, Rob; Riedmaier, Irmgard; Becker, Christiane; Ståhlberg, Anders; Kubista, Mikael
2009-10-01
Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. We recommend the use of sample replicates preferentially to any other replicates when working with solid tissue, cell cultures, and single cells, and we recommend the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget.
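The planning logic follows directly from the nested design: once variance components for each processing level are estimated, the variance of the final mean for a given replication plan is a sum of components divided by their replicate counts, and one can enumerate plans under a cost budget. A Python sketch, with placeholder variance components and costs rather than the paper's estimates:

```python
# For a subjects, s samples each, r RTs per sample, q qPCR replicates per RT:
#   Var(mean) = s2_subj/a + s2_samp/(a*s) + s2_rt/(a*s*r) + s2_qpcr/(a*s*r*q)
s2 = dict(subj=0.10, samp=0.30, rt=0.05, qpcr=0.02)   # log-scale variances (assumed)
cost = dict(subj=50, samp=10, rt=4, qpcr=1)           # cost per unit (assumed)
budget = 300

best = None
for a in range(2, 7):
    for s in range(1, 6):
        for r in range(1, 4):
            for q in range(1, 4):
                c = a * (cost['subj'] + s * (cost['samp']
                         + r * (cost['rt'] + q * cost['qpcr'])))
                if c > budget:
                    continue
                var = (s2['subj'] / a + s2['samp'] / (a * s)
                       + s2['rt'] / (a * s * r) + s2['qpcr'] / (a * s * r * q))
                if best is None or var < best[0]:
                    best = (var, a, s, r, q, c)
print(best)  # most precise affordable plan: (variance, a, s, r, q, cost)
```

With sampling variance dominant (as reported for solid tissue), the search naturally spends the budget on sample replicates, matching the recommendation above.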
Specificity, cross-talk and adaptation in Interferon signaling
NASA Astrophysics Data System (ADS)
Zilman, Anton
Innate immune system is the first line of defense of higher organisms against pathogens. It coordinates the behavior of millions of cells of multiple types, achieved through numerous signaling molecules. This talk focuses on the signaling specificity of a major class of signaling molecules - Type I Interferons - which are also used therapeutically in the treatment of a number of diseases, such as Hepatitis C, multiple sclerosis and some cancers. Puzzlingly, different Interferons act through the same cell surface receptor but have different effects on the target cells. They also exhibit a strange pattern of temporal cross-talk resulting in a serious clinical problem - loss of response to Interferon therapy. We combined mathematical modeling with quantitative experiments to develop a quantitative model of specificity and adaptation in the Interferon signaling pathway. The model resolves several outstanding experimental puzzles and directly affects the clinical use of Type I Interferons in treatment of viral hepatitis and other diseases.
Director gliding in a nematic liquid crystal layer: Quantitative comparison with experiments
NASA Astrophysics Data System (ADS)
Mema, E.; Kondic, L.; Cummings, L. J.
2018-03-01
The interaction between nematic liquid crystals and polymer-coated substrates may lead to slow reorientation of the easy axis (so-called "director gliding") when a prolonged external field is applied. We consider the experimental evidence of zenithal gliding observed by Joly et al. [Phys. Rev. E 70, 050701 (2004), 10.1103/PhysRevE.70.050701] and Buluy et al. [J. Soc. Inf. Disp. 14, 603 (2006), 10.1889/1.2235686] as well as azimuthal gliding observed by S. Faetti and P. Marianelli [Liq. Cryst. 33, 327 (2006), 10.1080/02678290500512227], and we present a simple, physically motivated model that captures the slow dynamics of gliding, both in the presence of an electric field and after the electric field is turned off. We make a quantitative comparison of our model results and the experimental data and conclude that our model explains the gliding evolution very well.
Design-based and model-based inference in surveys of freshwater mollusks
Dorazio, R.M.
1999-01-01
Well-known concepts in statistical inference and sampling theory are used to develop recommendations for planning and analyzing the results of quantitative surveys of freshwater mollusks. Two methods of inference commonly used in survey sampling (design-based and model-based) are described and illustrated using examples relevant in surveys of freshwater mollusks. The particular objectives of a survey and the type of information observed in each unit of sampling can be used to help select the sampling design and the method of inference. For example, the mean density of a sparsely distributed population of mollusks can be estimated with higher precision by using model-based inference or by using design-based inference with adaptive cluster sampling than by using design-based inference with conventional sampling. More experience with quantitative surveys of natural assemblages of freshwater mollusks is needed to determine the actual benefits of different sampling designs and inferential procedures.
Three-dimensional drift kinetic response of high-β plasmas in the DIII-D tokamak.
Wang, Z R; Lanctot, M J; Liu, Y Q; Park, J-K; Menard, J E
2015-04-10
A quantitative interpretation of the experimentally measured high-pressure plasma response to externally applied three-dimensional (3D) magnetic field perturbations, across the no-wall Troyon β limit, is achieved. The self-consistent inclusion of the drift kinetic effects in magnetohydrodynamic (MHD) modeling [Y. Q. Liu et al., Phys. Plasmas 15, 112503 (2008)] successfully resolves an outstanding issue of the ideal MHD model, which significantly overpredicts the plasma-induced field amplification near the no-wall limit, as compared to experiments. The model leads to quantitative agreement not only for the measured field amplitude and toroidal phase but also for the measured internal 3D displacement of the plasma. The results can be important to the prediction of the reliable plasma behavior in advanced fusion devices, such as ITER [K. Ikeda, Nucl. Fusion 47, S1 (2007)].
Smith, Annetta; Beattie, Michelle; Kyle, Richard G
2015-11-01
To develop a model of pre-nursing experience from evaluation of a pre-nursing scholarship for school pupils in Scotland. Action research study. School pupils (n = 42) completed questionnaire surveys and participated in anecdote circles. Student nurses acting as pupil 'buddies' (n = 33) participated in focus groups. Descriptive quantitative data and thematic analyses of qualitative data were integrated across cohorts and campuses. Ten recommended components of a model of pre-nursing experience were identified: educational experience of (1) face-to-face on-campus teaching, (2) hands-on clinical skills sessions, and (3) andragogy; practice exposure to (4) nursing language, (5) nurses' emotional labour, and (6) patients' stories; (7) pupils socializing with buddies; (8) buddies planning placement activities; and (9) supporting pupils during placements. Academic attainment was not a central component of the model due to pupils' need to (10) prioritize examined work for further/higher education entry.
Quantitating Antibody Uptake In Vivo: Conditional Dependence on Antigen Expression Levels
Thurber, Greg M.; Weissleder, Ralph
2010-01-01
Purpose Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake with some reports indicating a relationship between uptake and expression and others showing no correlation. Procedures Using a cell line with high EpCAM expression and moderate EGFR expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high affinity antibodies. Results As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than larger vascularized tumors. Conclusions These results are consistent with the prediction that high affinity antibody uptake is dependent on antigen expression levels for saturating doses and delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes. PMID:20809210
Quantitating antibody uptake in vivo: conditional dependence on antigen expression levels.
Thurber, Greg M; Weissleder, Ralph
2011-08-01
Antibodies form an important class of cancer therapeutics, and there is intense interest in using them for imaging applications in diagnosis and monitoring of cancer treatment. Despite the expanding body of knowledge describing pharmacokinetic and pharmacodynamic interactions of antibodies in vivo, discrepancies remain over the effect of antigen expression level on tumoral uptake with some reports indicating a relationship between uptake and expression and others showing no correlation. Using a cell line with high epithelial cell adhesion molecule expression and moderate epidermal growth factor receptor expression, fluorescent antibodies with similar plasma clearance were imaged in vivo. A mathematical model and mouse xenograft experiments were used to describe the effect of antigen expression on uptake of these high-affinity antibodies. As predicted by the theoretical model, under subsaturating conditions, uptake of the antibodies in such tumors is similar because localization of both probes is limited by delivery from the vasculature. In a separate experiment, when the tumor is saturated, the uptake becomes dependent on the number of available binding sites. In addition, targeting of small micrometastases is shown to be higher than larger vascularized tumors. These results are consistent with the prediction that high affinity antibody uptake is dependent on antigen expression levels for saturating doses and delivery for subsaturating doses. It is imperative for any probe to understand whether quantitative uptake is a measure of biomarker expression or transport to the region of interest. The data provide support for a predictive theoretical model of antibody uptake, enabling it to be used as a starting point for the design of more efficacious therapies and timely quantitative imaging probes.
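The conditional dependence described in the two records above can be captured in a toy calculation: uptake is capped either by vascular delivery over the imaging period (subsaturating dose) or by the number of available binding sites (saturating dose). The permeability rate, doses, and antigen levels in this Python sketch are invented placeholders, not the paper's fitted values.

```python
# Toy model: delivered amount grows with plasma concentration and time;
# bound amount cannot exceed the available antigen.
def tumor_uptake(dose_nM, antigen_nM, perm_per_h=0.01, t_h=24.0):
    delivered = perm_per_h * dose_nM * t_h   # transport-limited supply
    return min(delivered, antigen_nM)        # binding-site ceiling

for antigen_nM in (50.0, 500.0):             # moderate vs high expression
    print(antigen_nM,
          tumor_uptake(20.0, antigen_nM),    # subsaturating: same for both
          tumor_uptake(5000.0, antigen_nM))  # saturating: tracks expression
```

At the low dose both expression levels give identical uptake (delivery-limited); at the saturating dose uptake scales with antigen level, which is the crossover the theoretical model predicts.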
Perrin, Christian L; Tardy, Philippe M J; Sorbie, Ken S; Crawshaw, John C
2006-03-15
The in situ rheology of polymeric solutions has been studied experimentally in etched silicon micromodels which are idealizations of porous media. The rectangular channels in these etched networks have dimensions typical of pore sizes in sandstone rocks. Pressure drop/flow rate relations have been measured for water and non-Newtonian hydrolyzed-polyacrylamide (HPAM) solutions in both individual straight rectangular capillaries and in networks of such capillaries. Results from these experiments have been analyzed using pore-scale network modeling incorporating the non-Newtonian fluid mechanics of a Carreau fluid. Quantitative agreement is seen between the experiments and the network calculations in the Newtonian and shear-thinning flow regions, demonstrating that the 'shift factor', alpha, can be calculated a priori. Shear-thickening behavior was observed at higher flow rates in the micromodel experiments as a result of elastic effects becoming important, and this remains to be incorporated in the network model.
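The Carreau model at the heart of the network calculations is a single closed-form viscosity function. A Python sketch with parameter values representative of a polymer solution (assumed for illustration, not the paper's fits):

```python
import numpy as np

# Carreau viscosity:
#   eta(g) = eta_inf + (eta_0 - eta_inf) * (1 + (lam*g)**2)**((n-1)/2)
# eta_0/eta_inf: zero/infinite-shear viscosities (Pa.s); lam: relaxation
# time (s); n < 1 gives shear thinning.
def carreau(gamma_dot, eta_0=0.5, eta_inf=1e-3, lam=1.0, n=0.5):
    return eta_inf + (eta_0 - eta_inf) * (1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

# In network models the in-pore shear rate is commonly estimated as
# gamma_dot = alpha * v / r (mean velocity v, channel radius r), with alpha
# the shift factor discussed above.
print(carreau(np.logspace(-2, 4, 7)))   # viscosity across 6 decades of shear
```

Assigning this viscosity to each channel and solving the network's pressure/flow balance reproduces the measured Newtonian and shear-thinning branches; the elastic shear-thickening branch lies outside this purely viscous description.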
Mapping quantitative trait loci for binary trait in the F2:3 design.
Zhu, Chengsong; Zhang, Yuan-Ming; Guo, Zhigang
2008-12-01
In the analysis of inheritance of quantitative traits with low heritability, an F2:3 design that genotypes plants in F2 and phenotypes plants in F2:3 progeny is often used in plant genetics. Although statistical approaches for mapping quantitative trait loci (QTL) in the F2:3 design have been well developed, those for binary traits of biological interest and economic importance are seldom addressed. In this study, an attempt was made to map binary trait loci (BTL) in the F2:3 design. The fundamental idea was: the F2 plants were genotyped, all phenotypic values of each F2:3 progeny were measured for the binary trait, and these binary trait values and the marker genotype information were used to detect BTL under the penetrance and liability models. The proposed method was verified by a series of Monte Carlo simulation experiments. These results showed that maximum likelihood approaches under the penetrance and liability models provide accurate estimates of the effects and locations of BTL with high statistical power, even under low heritability. Moreover, the penetrance model is as efficient as the liability model, and the F2:3 design is more efficient than the classical F2 design, even though only a single progeny is collected from each F2:3 family. With the maximum likelihood approaches under the penetrance and liability models developed in this study, we can map binary traits as we do quantitative traits in the F2:3 design.
NASA Technical Reports Server (NTRS)
Wilcox, W. R.; Subramanian, R. S.; Meyyappan, M.; Smith, H. D.; Mattox, D. M.; Partlow, D. P.
1981-01-01
Thermal fining, thermal migration of bubbles under reduced gravity conditions, and data to verify current theoretical models of bubble location and temperatures as a function of time are discussed. A sample, sodium borate glass, was tested during 5 to 6 minutes of zero gravity during rocket flight. The test cell contained a heater strip; thermocouples were in the sample. At present quantitative data are insufficient to confirm results of theoretical calculations.
Searle, Brian C.; Egertson, Jarrett D.; Bollinger, James G.; Stergachis, Andrew B.; MacCoss, Michael J.
2015-01-01
Targeted mass spectrometry is an essential tool for detecting quantitative changes in low abundant proteins throughout the proteome. Although selected reaction monitoring (SRM) is the preferred method for quantifying peptides in complex samples, the process of designing SRM assays is laborious. Peptides have widely varying signal responses dictated by sequence-specific physiochemical properties; one major challenge is in selecting representative peptides to target as a proxy for protein abundance. Here we present PREGO, a software tool that predicts high-responding peptides for SRM experiments. PREGO predicts peptide responses with an artificial neural network trained using 11 minimally redundant, maximally relevant properties. Crucial to its success, PREGO is trained using fragment ion intensities of equimolar synthetic peptides extracted from data independent acquisition experiments. Because of similarities in instrumentation and the nature of data collection, relative peptide responses from data independent acquisition experiments are a suitable substitute for SRM experiments, since both make quantitative measurements from integrated fragment ion chromatograms. Using an SRM experiment containing 12,973 peptides from 724 synthetic proteins, PREGO exhibits a 40-85% improvement over previously published approaches at selecting high-responding peptides. These results also represent a dramatic improvement over the rules-based peptide selection approaches commonly used in the literature. PMID:26100116
Check-Up of Planet Earth at the Turn of the Millennium: Anticipated New Phase in Earth Sciences
NASA Technical Reports Server (NTRS)
Kaufman, Y. J.; Ramanathan, V.
1998-01-01
Langley's remarkable solar and lunar spectra collected from Mt. Whitney inspired Arrhenius to develop the first quantitative climate model in 1896. In 1999, NASA's Earth Observing AM Satellite (EOS-AM) will repeat Langley's experiment, but for the entire planet, thus pioneering calibrated spectral observations from space. Conceived in response to real environmental problems, EOS-AM, in conjunction with other international satellite efforts, will fill a major gap in current efforts by providing quantitative global data sets with a resolution of few kilometers on the physical, chemical and biological elements of the earth system. Thus, like Langley's data, EOS-AM can revolutionize climate research by inspiring a new generation of climate system models and enable us to assess the human impact on the environment.
Agent-based modeling: case study in cleavage furrow models
Mogilner, Alex; Manhart, Angelika
2016-01-01
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328
Mitigating direct detection bounds in non-minimal Higgs portal scalar dark matter models
NASA Astrophysics Data System (ADS)
Bhattacharya, Subhaditya; Ghosh, Purusottam; Maity, Tarak Nath; Ray, Tirtha Sankar
2017-10-01
The minimal Higgs portal dark matter model is increasingly in tension with recent results from direct detection experiments like LUX and XENON. In this paper we make a systematic study of simple extensions of the Z_2-stabilized singlet scalar Higgs portal scenario in terms of their prospects at direct detection experiments. We consider both enlarging the stabilizing symmetry to Z_3 and incorporating multipartite features in the dark sector. We demonstrate that in these non-minimal models the interplay of annihilation, co-annihilation and semi-annihilation processes considerably relaxes constraints from present and proposed direct detection experiments while simultaneously saturating the observed dark matter relic density. We explore in particular the resonant semi-annihilation channel within the multipartite Z_3 framework, which results in new unexplored regions of parameter space that would be difficult to constrain by direct detection experiments in the near future. The role of dark matter exchange processes within the multi-component Z_3 × Z_3' framework is illustrated. We make quantitative estimates to elucidate the role of various annihilation processes in the different allowed regions of parameter space within these models.
Parent-identified barriers to pediatric health care: a process-oriented model.
Sobo, Elisa J; Seid, Michael; Reyes Gelhard, Leticia
2006-02-01
To further understand barriers to care as experienced by health care consumers, and to demonstrate the importance of conjoining qualitative and quantitative health services research. Transcripts from focus groups conducted in San Diego with English- and Spanish-speaking parents of children with special health care needs. Participants were asked about the barriers to care they had experienced or perceived, and their strategies for overcoming these barriers. Using elementary anthropological discourse analysis techniques, a process-based conceptual model of the parent experience was devised. The analysis revealed a parent-motivated model of barriers to care that enriched our understanding of quantitative findings regarding the population from which the focus group sample was drawn. Parent-identified barriers were grouped into the following six temporally and spatially sequenced categories: necessary skills and prerequisites for gaining access to the system; realizing access once it is gained; front office experiences; interactions with physicians; system arbitrariness and fragmentation; outcomes that affect future interaction with the system. Key to the successful navigation of the system was parents' functional biomedical acculturation; this construct likens the biomedical health services system to a cultural system within which all parents/patients must learn to function competently. Qualitative analysis of focus group data enabled a deeper understanding of barriers to care--one that went beyond the traditional association of marker variables with poor outcomes ("what") to reveal an understanding of the processes by which parents experience the health care system ("how," "why") and by which disparities may arise. Development of such process-oriented models furthers the provision of patient-centered care and the creation of interventions, programs, and curricula to enhance such care. Qualitative discourse analysis, for example using this project's widely applicable protocol for generating experientially based models, can enhance our knowledge of the parent/patient experience and aid in the development of more powerful conceptualizations of key health care constructs.
NASA Astrophysics Data System (ADS)
Paillet, Frederick
2012-08-01
A simple mass-balance code allows effective modeling of conventional fluid column resistivity logs in dilution tests involving column replacement with either distilled water or dilute brine. Modeling a series of column profiles where the inflowing formation water introduces water quality interfaces propagating along the borehole gives effective estimates of the rate of borehole flow. Application of the dilution model yields estimates of borehole flow rates that agree with measurements made with the heat-pulse flowmeter under ambient and pumping conditions. Model dilution experiments are used to demonstrate how dilution logging can extend the range of borehole flow measurement at least an order of magnitude beyond that achieved with flowmeters. However, dilution logging has the same dynamic range limitation encountered with flowmeters because it is difficult to detect and characterize flow zones that contribute a small fraction of total flow when that contribution is superimposed on a larger flow. When the smaller contribution is located below the primary zone, ambient downflow may disguise the zone if pumping is not strong enough to reverse the outflow. This situation can be addressed by increased pumping. But this is likely to make the moveout of water quality interfaces too fast to measure in the upper part of the borehole, so that a combination of flowmeter and dilution method may be more appropriate. Numerical experiments show that the expected weak horizontal flow across the borehole at conductive zones would be almost impossible to recognize if any ambient vertical flow is present. In situations where natural water quality differences occur such as flowing boreholes or injection experiments, the simple mass-balance code can be used to quantitatively model the evolution of fluid column logs. Otherwise, dilution experiments can be combined with high-resolution flowmeter profiles to obtain results not attainable using either method alone.
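The essential mechanics of such a mass-balance dilution model fit in a short script: divide the fluid column into cells, let inflow zones inject formation water, and advect the mixture upward so a water-quality interface propagates along the borehole. The Python sketch below uses invented geometry and rates purely for illustration; it is not the author's code.

```python
import numpy as np

n_cells, dz, area = 100, 0.5, 0.018       # cells, cell height (m), bore area (m^2)
c = np.ones(n_cells)                      # tracer conc. (1 = replacement water),
                                          # index 0 = top of the column
inflow = np.zeros(n_cells)
inflow[70] = 2e-5                         # m^3/s of formation water (conc. 0)
dt, steps = 5.0, 2000                     # s

for _ in range(steps):
    # upward flow past each cell = all inflow entering at or below it
    q_up = np.cumsum(inflow[::-1])[::-1]
    adv = q_up * dt / (area * dz)         # cell fraction advected per step
    c_new = c.copy()
    c_new[:-1] += adv[1:] * (c[1:] - c[:-1])   # upwind advection from below
    c_new += (inflow * dt / (area * dz)) * (0.0 - c)  # mixing-in of inflow
    c = np.clip(c_new, 0.0, 1.0)
# the moveout rate of the interface in c gives the borehole flow rate
```

Matching the simulated interface positions to successive fluid-column resistivity logs is what yields the flow estimates that the paper compares with heat-pulse flowmeter measurements.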
NASA Astrophysics Data System (ADS)
Schmitz, R.; Yordanov, S.; Butt, H. J.; Koynov, K.; Dünweg, B.
2011-12-01
Total internal reflection fluorescence cross-correlation spectroscopy (TIR-FCCS) has recently [S. Yordanov et al., Opt. Express 17, 21149 (2009), 10.1364/OE.17.021149] been established as an experimental method to probe hydrodynamic flows near surfaces, on length scales of tens of nanometers. Its main advantage is that fluorescence occurs only for tracer particles close to the surface, thus resulting in high sensitivity. However, the measured correlation functions provide only rather indirect information about the flow parameters of interest, such as the shear rate and the slip length. In the present paper, we show how to combine detailed and fairly realistic theoretical modeling of the phenomena by Brownian dynamics simulations with accurate measurements of the correlation functions, in order to establish a quantitative method to retrieve the flow properties from the experiments. First, Brownian dynamics is used to sample highly accurate correlation functions for a fixed set of model parameters. Second, these parameters are varied systematically by means of an importance-sampling Monte Carlo procedure in order to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for massively parallel computers, which allows us to do the data analysis within moderate computing times. The method is applied to flow near a hydrophilic surface, where the slip length is observed to be smaller than 10 nm, and, within the limitations of the experiments and the model, indistinguishable from zero.
Zavos, Helena M S; Freeman, Daniel; Haworth, Claire M A; McGuire, Philip; Plomin, Robert; Cardno, Alastair G; Ronald, Angelica
2014-09-01
The onset of psychosis is usually preceded by psychotic experiences (PE). Little is known about the etiology of PE and whether the degree of genetic and environmental influences varies across different levels of severity. A recognized challenge is to identify individuals at high risk of developing psychotic disorders prior to disease onset. To investigate the degree of genetic and environmental influences on specific PE, assessed dimensionally, in adolescents in the community and in those who have many, frequent experiences (defined using quantitative cutoffs). We also assessed the degree of overlap in etiological influences between specific PE. Structural equation model-fitting, including univariate and bivariate twin models, liability threshold models, DeFries-Fulker extremes analysis, and the Cherny method, was used to analyze a representative community sample of 5059 adolescent twin pairs (mean [SD] age, 16.31 [0.68] years) from England and Wales. Psychotic experiences assessed as quantitative traits (self-rated paranoia, hallucinations, cognitive disorganization, grandiosity, and anhedonia, as well as parent-rated negative symptoms). Genetic influences were apparent for all PE (15%-59%), with modest shared environment for hallucinations and negative symptoms (17%-24%) and significant nonshared environment (49%-64%) for the self-rated scales and 17% for parent-rated negative symptoms. Three empirical approaches converged to suggest that the etiology in extreme-scoring groups (most extreme scoring: 5%, 10%, and 15%) did not differ significantly from that of the whole distribution. There was no linear change in heritability across the distribution of PE, with the exception of a modest increase in heritability for increasing severity of parent-rated negative symptoms. Of the PE that showed covariation, this appeared to be due to shared genetic influences (bivariate heritabilities, 0.54-0.71). These findings are consistent with the concept of a psychosis continuum, suggesting that the same genetic and environmental factors influence both extreme, frequent PE and milder, less frequent manifestations in adolescents. Individual PE in adolescence, assessed quantitatively, have lower heritability estimates and higher estimates of nonshared environment than those for the liability to schizophrenia. Heritability varies by type of PE, being highest for paranoia and parent-rated negative symptoms and lowest for hallucinations.
Interface Pattern Selection in Directional Solidification
NASA Technical Reports Server (NTRS)
Trivedi, Rohit; Tewari, Surendra N.
2001-01-01
The central focus of this research is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. Ground-based studies have established that the conditions under which cellular and dendritic microstructures form are precisely where convection effects are dominant in bulk samples. Thus, experimental data cannot be obtained terrestrially under a pure diffusive regime. Furthermore, reliable theoretical models are not yet possible which can quantitatively incorporate fluid flow in the pattern selection criterion. Consequently, microgravity experiments on cellular and dendritic growth are designed to obtain benchmark data under diffusive growth conditions that can be quantitatively analyzed and compared with the rigorous theoretical model to establish the fundamental principles that govern the selection of specific microstructure and its length scales. In the cellular structure, different cells in an array are strongly coupled so that the cellular pattern evolution is controlled by complex interactions between thermal diffusion, solute diffusion and interface effects. These interactions admit an infinity of solutions, and the system selects only a narrow band of solutions. The aim of this investigation is to obtain benchmark data and develop a rigorous theoretical model that will allow us to quantitatively establish the physics of this selection process.
Range and energetics of charge hopping in organic semiconductors
NASA Astrophysics Data System (ADS)
Abdalla, Hassan; Zuo, Guangzheng; Kemerink, Martijn
2017-12-01
The recent upswing in attention for the thermoelectric properties of organic semiconductors (OSCs) adds urgency to the need for a quantitative description of the range and energetics of hopping transport in organic semiconductors under relevant circumstances, i.e., around room temperature (RT). In particular, the degree to which hops beyond the nearest neighbor must be accounted for at RT is still largely unknown. Here, measurements of charge and energy transport in doped OSCs are combined with analytical modeling to reach the univocal conclusion that variable-range hopping is the proper description in a large class of disordered OSC at RT. To obtain quantitative agreement with experiment, one needs to account for the modification of the density of states by ionized dopants. These Coulomb interactions give rise to a deep tail of trap states that is independent of the material's initial energetic disorder. Insertion of this effect into a classical Mott-type variable-range hopping model allows one to give a quantitative description of temperature-dependent conductivity and thermopower measurements on a wide range of disordered OSCs. In particular, the model explains the commonly observed quasiuniversal power-law relation between the Seebeck coefficient and the conductivity.
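The Mott-type variable-range hopping form the model builds on is a one-line expression, and the standard check against temperature-dependent conductivity data is a linear fit of ln σ against T^(-1/4). A Python sketch with synthetic data (the parameter values are illustrative only):

```python
import numpy as np

# 3D Mott VRH: sigma(T) = sigma0 * exp(-(T0/T)**(1/4)), so
# ln(sigma) is linear in T**(-1/4) with slope -T0**(1/4).
rng = np.random.default_rng(3)
sigma0_true, T0_true = 1.0e3, 5.0e7          # S/m and K, assumed for the demo
T = np.linspace(200, 350, 16)                # K, around room temperature
sigma = sigma0_true * np.exp(-(T0_true / T) ** 0.25)
sigma *= np.exp(0.03 * rng.standard_normal(T.size))   # multiplicative noise

slope, intercept = np.polyfit(T ** -0.25, np.log(sigma), 1)
T0_fit, sigma0_fit = slope ** 4, np.exp(intercept)
print(T0_fit, sigma0_fit)                    # recovers ~5e7 K and ~1e3 S/m
```

In the paper's analysis the same functional form is extended with the dopant-induced Coulomb tail in the density of states, which modifies the effective parameters rather than the fitting procedure itself.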
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
Helfer, Peter; Shultz, Thomas R
2014-12-01
The widespread availability of calorie-dense food is believed to be a contributing cause of an epidemic of obesity and associated diseases throughout the world. One possible countermeasure is to empower consumers to make healthier food choices with useful nutrition labeling. An important part of this endeavor is to determine the usability of existing and proposed labeling schemes. Here, we report an experiment on how four different labeling schemes affect the speed and nutritional value of food choices. We then apply decision field theory, a leading computational model of human decision making, to simulate the experimental results. The psychology experiment shows that quantitative, single-attribute labeling schemes have greater usability than multiattribute and binary ones, and that they remain effective under moderate time pressure. The computational model simulates these psychological results and provides explanatory insights into them. This work shows how experimental psychology and computational modeling can contribute to the evaluation and improvement of nutrition-labeling schemes. © 2014 New York Academy of Sciences.
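A bare-bones decision field theory accumulator of the kind applied above can be written in a few lines: preferences drift according to attention-weighted attribute evaluations plus noise until one option crosses a threshold, with decision time falling out of the same process. The option values, attention weights, and parameters in this Python sketch are invented for illustration, not the paper's fitted simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
M = np.array([[0.9, 0.2],          # option A: tastier, less healthy
              [0.4, 0.8]])         # option B: plainer, healthier
w = np.array([0.5, 0.5])           # probability of attending to each attribute
theta, sigma = 3.0, 0.3            # decision threshold, momentary noise

def simulate_choice():
    P, steps = np.zeros(2), 0
    while np.all(P < theta):
        attr = rng.choice(2, p=w)               # attention switches stochastically
        P += M[:, attr] - M[:, attr].mean()     # contrast between the options
        P += sigma * rng.standard_normal(2)
        steps += 1
    return int(np.argmax(P)), steps             # choice and decision time

choices = [simulate_choice()[0] for _ in range(1000)]
print(np.mean(choices))            # proportion choosing the healthier option
```

Label formats enter such a simulation through the attribute values and attention weights; simpler quantitative labels effectively sharpen the health attribute's evaluability, which is how the model can reproduce faster, healthier choices.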
Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K
2015-04-01
Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype-by-environment interaction and polygenic inheritance complicate the incorporation of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
Lack of blinding of outcome assessors in animal model experiments implies risk of observer bias.
Bello, Segun; Krogsbøll, Lasse T; Gruber, Jan; Zhao, Zhizhuang J; Fischer, Doris; Hróbjartsson, Asbjørn
2014-09-01
To examine the impact of not blinding outcome assessors on estimates of intervention effects in animal experiments modeling human clinical conditions, we searched PubMed, Biosis, Google Scholar, and HighWire Press and included animal model experiments with both blinded and nonblinded outcome assessors. For each experiment, we calculated the ratio of odds ratios (ROR), that is, the odds ratio (OR) from nonblinded assessments relative to the corresponding OR from blinded assessments. We standardized the ORs according to the experimental hypothesis, such that an ROR <1 indicates that nonblinded assessors exaggerated the intervention effect, that is, exaggerated benefit in experiments investigating possible benefit or exaggerated harm in experiments investigating possible harm. We pooled RORs with inverse-variance random-effects meta-analysis. We included 10 experiments (2,450 animals) in the main meta-analysis. Outcomes were subjective in most experiments. The pooled ROR was 0.41 (95% confidence interval [CI], 0.20, 0.82; I² = 75%; P < 0.001), indicating an average exaggeration of the nonblinded ORs by 59%. The heterogeneity was quantitative and caused by three pesticide experiments with very large observer bias: their pooled ROR was 0.20 (95% CI, 0.07, 0.59), in contrast to the pooled ROR of 0.82 (95% CI, 0.57, 1.17) in the other seven experiments. Lack of blinding of outcome assessors in animal model experiments with subjective outcomes implies a considerable risk of observer bias. Copyright © 2014 Elsevier Inc. All rights reserved.
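The pooling described above is standard inverse-variance random-effects machinery applied to log RORs. A minimal sketch (the per-experiment inputs are hypothetical, and independence of the blinded and nonblinded log-ORs within an experiment is assumed):

```python
import numpy as np

def pooled_ror(log_or_nonblind, se_nb, log_or_blind, se_b):
    """DerSimonian-Laird random-effects pooling of log ratios of odds ratios."""
    y = np.asarray(log_or_nonblind) - np.asarray(log_or_blind)  # log ROR per experiment
    v = np.asarray(se_nb) ** 2 + np.asarray(se_b) ** 2          # assumes independence
    w = 1.0 / v                                                 # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)        # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                                     # random-effects weights
    est = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    # Pooled ROR with 95% CI; a value < 1 indicates nonblinded exaggeration.
    return np.exp(est), (np.exp(est - 1.96 * se), np.exp(est + 1.96 * se))
```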
Adams, P C; Rickert, D E
1996-11-01
We tested the hypothesis that the small intestine is capable of the first-pass, reductive metabolism of xenobiotics. A simplified version of the isolated vascularly perfused rat small intestine was developed to test this hypothesis with 1,3-dinitrobenzene (1,3-DNB) as a model xenobiotic. Both 3-nitroaniline (3-NA) and 3-nitroacetanilide (3-NAA) were formed and absorbed following intralumenal doses of 1,3-DNB (1.8 or 4.2 μmol) to the isolated vascularly perfused rat small intestine. Dose, fasting, or antibiotic pretreatment had no effect on the absorption and metabolism of 1,3-DNB in this model system. The failure of antibiotic pretreatment to alter the metabolism of 1,3-DNB indicated that 1,3-DNB metabolism was mammalian rather than microfloral in origin. All data from experiments initiated with lumenal 1,3-DNB were fit to a pharmacokinetic model (model A). Analysis of variance revealed that dose, fasting, or antibiotic pretreatment had no statistically significant effect on the model-dependent parameters. 3-NA (1.5 μmol) was administered to the lumen of the isolated vascularly perfused rat small intestine to evaluate model A predictions for the absorption and metabolism of this metabolite. All data from experiments initiated with 3-NA were fit to a pharmacokinetic model (model B). Comparison of corresponding model-dependent pharmacokinetic parameters (i.e., those parameters which describe the same processes in models A and B) revealed quantitative differences. Accounting for the significant quantitative differences in the pharmacokinetics or metabolism of formed versus preformed 3-NA in rat small intestine may require better definition of the rate constants used to describe tissue and lumenal processes, or identification and incorporation of the remaining unidentified metabolites into the models.
ERIC Educational Resources Information Center
Campbell, Corbin M.
2017-01-01
This article describes quantitative observation as a method for understanding college educational experiences. Quantitative observation has been used widely in several fields and in K-12 education, but has had limited application to research in higher education and student affairs to date. The article describes the central tenets of quantitative…
Ning, Shaoyang; Xu, Hongquan; Al-Shyoukh, Ibrahim; Feng, Jiaying; Sun, Ren
2014-10-30
Combination chemotherapy with multiple drugs has been widely applied to cancer treatment owing to enhanced efficacy and reduced drug resistance. Response surface modeling has been commonly adopted for the analysis of drug combination experiments. In this paper, we introduce a Hill-based global response surface model and apply it to a 512-run drug combination experiment with three chemicals, namely AG490, U0126, and indirubin-3'-monoxime (I-3-M), on lung cancer cells. The results demonstrate generally improved goodness of fit of our model over the traditional polynomial model, as well as over the original Hill model based on fixed-ratio drug combinations. On the basis of our model, we identify different dose-effect patterns between normal and cancer cells, which indicates the potential effectiveness of the drug combination in cancer treatment. Meanwhile, drug interactions are analyzed both qualitatively and quantitatively. The distinct interaction patterns between U0126 and I-3-M on the two types of cells uncovered by the model could be a further indicator of the efficacy of the drug combination. Copyright © 2014 John Wiley & Sons, Ltd.
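For a single agent, the Hill building block underlying such response surfaces can be fitted directly; the paper's contribution is the global multi-drug generalization. A sketch with invented dose-effect data (all values hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, e0, emax, ec50, n):
    """Four-parameter Hill dose-effect curve."""
    return e0 + (emax - e0) * dose ** n / (ec50 ** n + dose ** n)

dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # hypothetical doses
effect = np.array([0.05, 0.12, 0.33, 0.62, 0.85, 0.95])  # hypothetical responses
(e0, emax, ec50, n), _ = curve_fit(hill, dose, effect, p0=[0.0, 1.0, 2.0, 1.0])
```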
Assessing healthcare professionals' experiences of integrated care: do surveys tell the full story?
Stephenson, Matthew D; Campbell, Jared M; Lisy, Karolina; Aromataris, Edoardo C
2017-09-01
Integrated care is the combination of different healthcare services with the goal of providing comprehensive, seamless, effective and efficient patient care. Assessing the experiences of healthcare professionals (HCPs) is an important aspect of evaluating integrated care strategies. The aim of this rapid review was to investigate whether quantitative surveys used to assess HCPs' experiences with integrated care capture all the aspects highlighted as being important in qualitative research, with a view to informing future survey development. The review considered all types of health professionals in primary care, and hospital and specialist services, with a specific focus on the provision of integrated care aimed at improving the patient journey. PubMed, CINAHL and grey literature sources were searched for relevant surveys/program evaluations and qualitative research studies. Full text articles deemed to be of relevance to the review were appraised for methodological quality using abridged critical appraisal instruments from the Joanna Briggs Institute. Data were extracted from included studies using standardized data extraction templates. Findings from included studies were grouped into domains based on similarity of meaning. Similarities and differences between the domains covered in quantitative surveys and those identified as being important in qualitative research were explored. A total of 37 studies (19 quantitative surveys, 14 qualitative studies and four mixed-method studies) were included in the review. A range of healthcare professions participated in the included studies, the majority being primary care providers. Common domains identified from quantitative surveys and qualitative studies included Communication, Agreement on Clear Roles and Responsibilities, Facilities, Information Systems, and Coordination of Care and Access. Qualitative research highlighted domains identified by HCPs as being relevant to their experiences with integrated care that have not routinely been surveyed, including Workload, Clear Leadership/Decision-Making, Management, Flexibility of Integrated Care Model, Engagement, Usefulness of Integrated Care and Collaboration, and Positive Impact/Clinical Benefits/Practice Level Benefits. Several domains identified from qualitative research are not routinely included in quantitative surveys assessing health professionals' experiences of integrated care. In addition, the qualitative findings suggest that the experiences of HCPs are often shaped by deeper aspects than those measured by existing surveys. Incorporating targeted items within these domains in the design of surveys should enhance the capture of data relevant to the experiences of HCPs with integrated care, which may assist in more comprehensive evaluation and subsequent improvement of integrated care programs.
Modeling aeolian dune and dune field evolution
NASA Astrophysics Data System (ADS)
Diniega, Serina
Aeolian sand dune morphologies and sizes are strongly connected to the environmental context and physical processes active since dune formation. As such, the patterns and measurable features found within dunes and dune fields can be interpreted as records of environmental conditions. Using mathematical models of dune and dune field evolution, it should be possible to quantitatively predict dune field dynamics from current conditions or to determine past field conditions based on present-day observations. In this dissertation, we focus on the construction and quantitative analysis of a continuum dune evolution model. We then apply this model towards interpretation of the formative history of terrestrial and martian dunes and dune fields. Our first aim is to identify the controls for the characteristic lengthscales seen in patterned dune fields. Variations in sand flux, binary dune interactions, and topography are evaluated with respect to evolution of individual dunes. Through the use of both quantitative and qualitative multiscale models, these results are then extended to determine the role such processes may play in (de)stabilization of the dune field. We find that sand flux variations and topography generally destabilize dune fields, while dune collisions can yield more similarly-sized dunes. We construct and apply a phenomenological macroscale dune evolution model to then quantitatively demonstrate how dune collisions cause a dune field to evolve into a set of uniformly-sized dunes. Our second goal is to investigate the influence of reversing winds and polar processes in relation to dune slope and morphology. Using numerical experiments, we investigate possible causes of distinctive morphologies seen in Antarctic and martian polar dunes. Finally, we discuss possible model extensions and needed observations that will enable the inclusion of more realistic physical environments in the dune and dune field evolution models. By elucidating the qualitative and quantitative connections between environmental conditions, physical processes, and resultant dune and dune field morphologies, this research furthers our ability to interpret spacecraft images of dune fields, and to use present-day observations to improve our understanding of past terrestrial and martian environments.
Medlyn, Belinda E; De Kauwe, Martin G; Zaehle, Sönke; Walker, Anthony P; Duursma, Remko A; Luus, Kristina; Mishurov, Mikhail; Pak, Bernard; Smith, Benjamin; Wang, Ying-Ping; Yang, Xiaojuan; Crous, Kristine Y; Drake, John E; Gimeno, Teresa E; Macdonald, Catriona A; Norby, Richard J; Power, Sally A; Tjoelker, Mark G; Ellsworth, David S
2016-08-01
The response of terrestrial ecosystems to rising atmospheric CO2 concentration (Ca), particularly under nutrient-limited conditions, is a major uncertainty in Earth System models. The Eucalyptus Free-Air CO2 Enrichment (EucFACE) experiment, recently established in a nutrient- and water-limited woodland, presents a unique opportunity to address this uncertainty, but can best do so if key model uncertainties have been identified in advance. We applied seven vegetation models, which have previously been comprehensively assessed against earlier forest FACE experiments, to simulate a priori possible outcomes from EucFACE. Our goals were to provide quantitative projections against which to evaluate data as they are collected, and to identify key measurements that should be made in the experiment to allow discrimination among alternative model assumptions in a postexperiment model intercomparison. Simulated responses of annual net primary productivity (NPP) to elevated Ca ranged from 0.5 to 25% across models. The simulated reduction of NPP during a low-rainfall year also varied widely, from 24 to 70%. Key processes where assumptions caused disagreement among models included nutrient limitations to growth; feedbacks to nutrient uptake; autotrophic respiration; and the impact of low soil moisture availability on plant processes. Knowledge of the causes of variation among models is now guiding data collection in the experiment, with the expectation that the experimental data can optimally inform future model improvements. © 2016 John Wiley & Sons Ltd.
Zooming in on neutrino oscillations with DUNE
NASA Astrophysics Data System (ADS)
Srivastava, Rahul; Ternes, Christoph A.; Tórtola, Mariam; Valle, José W. F.
2018-05-01
We examine the capabilities of the DUNE experiment as a probe of the neutrino mixing paradigm. Taking the current status of neutrino oscillations and the design specifications of DUNE, we determine the experiment's potential to probe the structure of neutrino mixing and CP violation. We focus on the poorly determined parameters θ23 and δCP and consider both two and seven years of running. We take various benchmarks as our true values, such as the current preferred values of θ23 and δCP, as well as several theory-motivated choices. We determine quantitatively DUNE's potential to perform a precision measurement of θ23, as well as to test the CP violation hypothesis in a model-independent way. We find that, after running for seven years, DUNE will make a substantial step in the precise determination of these parameters, bringing the predictions of various theories of neutrino mixing to a quantitative test.
Variables affecting learning in a simulation experience: a mixed methods study.
Beischel, Kelly P
2013-02-01
The primary purpose of this study was to test a hypothesized model describing the direct effects of learning variables on anxiety and cognitive learning outcomes in a high-fidelity simulation (HFS) experience. The secondary purpose was to explain and explore student perceptions concerning the qualities and context of HFS affecting anxiety and learning. This study used a mixed methods quantitative-dominant explanatory design with concurrent qualitative data collection to examine variables affecting learning in undergraduate, beginning nursing students (N = 124). Being ready to learn, having a strong auditory-verbal learning style, and being prepared for simulation directly affected anxiety, whereas learning outcomes were directly affected by having strong auditory-verbal and hands-on learning styles. Anxiety did not quantitatively mediate cognitive learning outcomes as theorized, although students qualitatively reported debilitating levels of anxiety. This study advances nursing education science by providing evidence concerning variables affecting learning outcomes in HFS.
Quantifying reactive transport processes governing arsenic mobility in a Bengal Delta aquifer
NASA Astrophysics Data System (ADS)
Rawson, Joey; Neidhardt, Harald; Siade, Adam; Berg, Michael; Prommer, Henning
2017-04-01
Over the last few decades significant progress has been made to characterize the extent and severity of groundwater arsenic pollution in S/SE Asia, and to understand the underlying geochemical processes. However, comparably little effort has been made to merge the findings from this research into quantitative frameworks that allow for a process-based quantitative analysis of observed arsenic behavior and predictions of its future fate. Therefore, this study developed and tested field-scale numerical modelling approaches to represent the primary and secondary geochemical processes associated with the reductive dissolution of Fe-oxy(hydr)oxides and the concomitant release of sorbed arsenic. We employed data from an in situ field experiment in the Bengal Delta Plain, which investigated the influence of labile organic matter (sucrose) on the mobility of Fe, Mn, and As. The data collected during the field experiment were used to guide our model development and to constrain the model parameterisation. Our results show that sucrose oxidation coupled to the reductive dissolution of Fe-oxy(hydr)oxides was accompanied by multiple secondary geochemical reactions that are not easily and uniquely identifiable and quantifiable. Those secondary reactions can explain the disparity between the observed Fe and As behavior. Our modelling results suggest that a significant fraction of the released As is scavenged through (co-)precipitation with newly formed Fe-minerals, specifically magnetite, rather than through sorption to pre-existing and freshly precipitated iron minerals.
Quantifying dispersal of southern pine beetles with mark-recapture experiments and a diffusion model
P. Turchin; W.T. Thoeny
1993-01-01
Pest management decisions should take into consideration quantitative information on dispersal of insect pests, but such information is often lacking. The goal of this study was to measure intraforest dispersal in the southern pine beetle (SPB). We developed an analytical formula for interpreting data from mark-recapture studies of insect dispersal. The proposed...
The Relationship between Agriculture Knowledge Bases for Teaching and Sources of Knowledge
ERIC Educational Resources Information Center
Rice, Amber H.; Kitchel, Tracy
2015-01-01
The purpose of this study was to describe the agriculture knowledge bases for teaching of agriculture teachers and to see if a relationship existed between years of teaching experience, sources of knowledge, and development of pedagogical content knowledge (PCK), using quantitative methods. A model of PCK from mathematics was utilized as a…
ERIC Educational Resources Information Center
Butler, Rory
2013-01-01
Internet-enabled mobile devices have increased the accessibility of learning content for students. Given the ubiquitous nature of mobile computing technology, a thorough understanding of the acceptance factors that impact a learner's intention to use mobile technology as an augment to their studies is warranted. Student acceptance of mobile…
USDA-ARS?s Scientific Manuscript database
In recent years, increased awareness of the potential interactions between rising atmospheric CO2 concentrations ([CO2]) and temperature has illustrated the importance of multi-factorial ecosystem manipulation experiments for validating Earth System models. To address the urgent need for increased u...
Friction-term response to boundary-condition type in flow models
Schaffranek, R.W.; Lai, C.
1996-01-01
The friction-slope term in the unsteady open-channel flow equations is examined using two numerical models based on different formulations of the governing equations and employing different solution methods. The purposes of the study are to analyze, evaluate, and demonstrate the behavior of the term in a set of controlled numerical experiments using varied types and combinations of boundary conditions. Results of numerical experiments illustrate that a given model can respond inconsistently for the identical resistance-coefficient value under different types and combinations of boundary conditions. Findings also demonstrate that two models employing different dependent variables and solution methods can respond similarly for the identical resistance-coefficient value under similar types and combinations of boundary conditions. Discussion of qualitative considerations and quantitative experimental results provides insight into the proper treatment, evaluation, and significance of the friction-slope term, thereby offering practical guidelines for model implementation and calibration.
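For context, the friction-slope term examined here is commonly parameterized with Manning's formula (one standard choice; the two models in the study need not use exactly this form):

```latex
S_f = \frac{n^2\, Q\,\lvert Q\rvert}{A^2 R^{4/3}}
```

where \(n\) is the Manning resistance coefficient, \(Q\) the discharge, \(A\) the flow area, and \(R\) the hydraulic radius (SI units); the sign convention \(Q\lvert Q\rvert\) makes friction oppose the flow direction.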
Hinckley, Daniel M.; Freeman, Gordon S.; Whitmer, Jonathan K.; de Pablo, Juan J.
2013-01-01
A new 3-Site-Per-Nucleotide coarse-grained model for DNA is presented. The model includes anisotropic potentials between bases involved in base stacking and base pair interactions that enable the description of relevant structural properties, including the major and minor grooves. In an improvement over available coarse-grained models, the correct persistence length is recovered for both ssDNA and dsDNA, allowing for simulation of non-canonical structures such as hairpins. DNA melting temperatures, measured for duplexes and hairpins by integrating over free energy surfaces generated using metadynamics simulations, are shown to be in quantitative agreement with experiment for a variety of sequences and conditions. Hybridization rate constants, calculated using forward-flux sampling, are also shown to be in good agreement with experiment. The coarse-grained model presented here is suitable for use in biological and engineering applications, including nucleosome positioning and DNA-templated engineering. PMID:24116642
Nonlinear-programming mathematical modeling of coal blending for power plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Longhua; Zhou Junhu; Yao Qiang
At present, most blending work is guided by experience or by linear programming (LP), which cannot properly reflect the complicated characteristics of coal. Experimental and theoretical research shows that most coal blend properties cannot always be measured as a linear function of the properties of the individual coals in the blend. The authors introduced nonlinear functions and processes (including neural networks and fuzzy mathematics), based on experiments conducted by the authors and other researchers, to quantitatively describe the complex coal blend parameters. Finally, nonlinear-programming (NLP) mathematical modeling of coal blending is introduced and utilized in the Hangzhou Coal Blending Center. Predictions based on the new method differed from those based on LP modeling. The authors conclude that it is very important to introduce NLP modeling, instead of LP modeling, into the work of coal blending.
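As a hedged illustration of why NLP is needed (the objective, properties, and coefficients below are invented, not the Hangzhou center's model), a nonlinear blend-property constraint is handled directly by an NLP solver where LP cannot represent it:

```python
import numpy as np
from scipy.optimize import minimize

cost = np.array([42.0, 55.0, 61.0])    # $/t for three coals (hypothetical)
sulfur = np.array([1.2, 0.8, 0.5])     # wt% S, assumed to blend linearly
heat = np.array([21.0, 24.5, 26.0])    # MJ/kg

def blend_heat(x):
    # Hypothetical nonlinear blending rule: linear mix minus a pairwise
    # interaction penalty, standing in for the neural-network/fuzzy terms.
    return heat @ x - 0.8 * x[0] * x[2]

res = minimize(
    lambda x: cost @ x,                # minimize blend cost
    x0=np.full(3, 1.0 / 3.0),
    bounds=[(0.0, 1.0)] * 3,
    constraints=[
        {"type": "eq", "fun": lambda x: x.sum() - 1.0},           # mass fractions
        {"type": "ineq", "fun": lambda x: 1.0 - sulfur @ x},      # S <= 1.0 wt%
        {"type": "ineq", "fun": lambda x: blend_heat(x) - 23.0},  # heat >= 23 MJ/kg
    ],
    method="SLSQP",
)
print(res.x, cost @ res.x)
```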
A dual memory theory of the testing effect.
Rickard, Timothy C; Pan, Steven C
2017-06-05
A new theoretical framework for the testing effect (the finding that retrieval practice is usually more effective for learning than other strategies) is proposed; its empirically supported tenet is that separate memories form as a consequence of study and test events. A simplest-case quantitative model is derived from that framework for the case of cued recall. With no free parameters, that model predicts both proportion correct in the test condition and the magnitude of the testing effect across 10 experiments conducted in our laboratory, experiments that varied with respect to material type, retention interval, and performance in the restudy condition. The model also provides the first quantitative accounts of (a) the testing effect as a function of performance in the restudy condition, (b) the upper-bound magnitude of the testing effect, (c) the effect of correct-answer feedback, (d) the testing effect as a function of retention interval for the cases of feedback and no feedback, and (e) the effect of prior learning method on subsequent learning through testing. Candidate accounts of several other core phenomena in the literature, including test-potentiated learning, recognition versus cued recall training effects, cued versus free recall final test effects, and other select transfer effects, are also proposed. Future prospects and relations to other theories are discussed.
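One simple way to formalize the separate-memories tenet (an illustration consistent with the abstract, not necessarily the authors' exact derivation) treats the study-based and test-based traces as independent routes to recall, so that in the test condition

```latex
P(\text{recall}) = 1 - (1 - p_s)(1 - p_t)
```

where \(p_s\) and \(p_t\) are the retrieval probabilities of the two traces; with \(p_s\) estimated from the restudy condition, a prediction for the testing-effect magnitude follows without free parameters.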
A Qualitative-Quantitative H-NMR Experiment for the Instrumental Analysis Laboratory.
ERIC Educational Resources Information Center
Phillips, John S.; Leary, James J.
1986-01-01
Describes an experiment combining qualitative and quantitative information from hydrogen nuclear magnetic resonance spectra. Reviews theory, discusses the experimental approach, and provides sample results. (JM)
Reducible or irreducible? Mathematical reasoning and the ontological method.
Fisher, William P
2010-01-01
Science is often described as nothing but the practice of measurement. This perspective follows from longstanding respect for the roles mathematics and quantification have played as media through which alternative hypotheses are evaluated and experience becomes better managed. Many figures in the history of science and psychology have contributed to what has been called the "quantitative imperative," the demand that fields of study employ number and mathematics even when they do not constitute the language in which investigators think together. But what makes an area of study scientific is, of course, not the mere use of number, but communities of investigators who share common mathematical languages for exchanging qualitative and quantitative value. Such languages require rigorous theoretical underpinning, a basis in data sufficient to the task, and instruments traceable to reference-standard quantitative metrics. The values shared and exchanged by such communities typically involve the application of mathematical models that specify the sufficient and invariant relationships necessary for rigorous theorizing and instrument equating. The mathematical metaphysics of science are explored with the aim of connecting principles of quantitative measurement with the structures of sufficient reason.
Qualitative versus quantitative methods in psychiatric research.
Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S
2012-01-01
Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in earlier phases of research, qualitative methods can be employed in questionnaire design, diagnostic criteria establishment, and feasibility studies, as well as studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be statistically confirmed with quantitative methods. The tendency to combine qualitative and quantitative methods as complementary approaches has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barani, T.; Bruschi, E.; Pizzocri, D.
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.
ERIC Educational Resources Information Center
Bazzi, Ali; Kreuz, Bette; Fischer, Jeffrey
2004-01-01
An experiment for determination of calcium in cereal using two-increment standard addition method in conjunction with flame atomic absorption spectroscopy (FAAS) is demonstrated. The experiment is intended to introduce students to the principles of atomic absorption spectroscopy giving them hands on experience using quantitative methods of…
Dimov, Alexey V; Liu, Zhe; Spincemaille, Pascal; Prince, Martin R; Du, Jiang; Wang, Yi
2018-01-01
To develop quantitative susceptibility mapping (QSM) of bone using an ultrashort echo time (UTE) gradient echo (GRE) sequence for signal acquisition and a bone-specific effective transverse relaxation rate (R2*) to model water-fat MR signals for field mapping. Three-dimensional radial UTE data (echo times ≥ 40 μs) were acquired on a 3 Tesla scanner and fitted with a bone-specific signal model to map the chemical species and susceptibility field. Experiments were performed ex vivo on a porcine hoof and in vivo on healthy human subjects (n = 7). For water-fat separation, a bone-specific model assigning R2* decay mostly to water was compared with the standard models that assign the same decay to both fat and water. In the ex vivo experiment, bone QSM was correlated with CT. Compared with standard models, the bone-specific R2* method significantly reduced errors in the fat fraction within the cortical bone in all tested data sets, leading to reduced artifacts in QSM. Good correlation was found between bone CT and QSM values in the porcine hoof (R² = 0.77). Bone QSM was successfully generated in all subjects. The QSM of bone is feasible using UTE with a conventional echo time GRE acquisition and a bone-specific R2* signal model. Magn Reson Med 79:121-128, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
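Schematically, and with a single-peak fat model shown only for illustration (the study's fat spectral model may differ), assigning the effective decay to the water pool gives a signal equation of the form

```latex
s(\mathrm{TE}) = \left( W\, e^{-R_2^{*}\,\mathrm{TE}} + F\, e^{\,i 2\pi \Delta f_{\mathrm{fat}}\,\mathrm{TE}} \right) e^{\,i 2\pi f_B\,\mathrm{TE}}
```

where \(W\) and \(F\) are the water and fat amplitudes, \(\Delta f_{\mathrm{fat}}\) is the fat chemical shift, and \(f_B\) is the susceptibility-induced field offset that QSM subsequently inverts.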
Validation of model predictions of pore-scale fluid distributions during two-phase flow
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Lin, Qingyang; Gao, Ying; Raeini, Ali Q.; AlRatrout, Ahmed; Bijeljic, Branko; Blunt, Martin J.
2018-05-01
Pore-scale two-phase flow modeling is an important technology to study a rock's relative permeability behavior. To investigate if these models are predictive, the calculated pore-scale fluid distributions which determine the relative permeability need to be validated. In this work, we introduce a methodology to quantitatively compare models to experimental fluid distributions in flow experiments visualized with microcomputed tomography. First, we analyzed five repeated drainage-imbibition experiments on a single sample. In these experiments, the exact fluid distributions were not fully repeatable on a pore-by-pore basis, while the global properties of the fluid distribution were. Then two fractional flow experiments were used to validate a quasistatic pore network model. The model correctly predicted the fluid present in more than 75% of pores and throats in drainage and imbibition. To quantify what this means for the relevant global properties of the fluid distribution, we compare the main flow paths and the connectivity across the different pore sizes in the modeled and experimental fluid distributions. These essential topology characteristics matched well for drainage simulations, but not for imbibition. This suggests that the pore-filling rules in the network model we used need to be improved to make reliable predictions of imbibition. The presented analysis illustrates the potential of our methodology to systematically and robustly test two-phase flow models to aid in model development and calibration.
Quantitative evaluation methods of skin condition based on texture feature parameters.
Pang, Hui; Chen, Tianhua; Wang, Xiaoyi; Chang, Zhineng; Shao, Siqi; Zhao, Jing
2017-03-01
In order to quantitatively evaluate the improvement of skin condition after the use of skin care products and beauty treatments, a quantitative evaluation method for skin surface state and texture is presented, which is convenient, fast and non-destructive. Human skin images were collected by image sensors. Firstly, a median filter with a 3 × 3 window is applied, and the locations of hair pixels on the skin are accurately detected according to the gray mean value and color information. Bilinear interpolation is then used to modify the gray value of the hair pixels in order to eliminate the negative effect of noise and tiny hairs on the texture. After this pretreatment, the gray level co-occurrence matrix (GLCM) is calculated. On this basis, four characteristic parameters, namely the second moment, contrast, entropy and correlation, and their mean values are calculated at 45° intervals. A quantitative evaluation model of skin texture based on the GLCM is established, which can calculate comprehensive parameters of skin condition. Experiments show that evaluations of skin condition made with this method agree both with evaluation methods based on biochemical indicators and with human visual experience. The method overcomes the shortcomings of biochemical evaluation, namely skin damage and long waiting times, as well as the subjectivity and fuzziness of visual evaluation, achieving non-destructive, rapid and quantitative evaluation of skin condition. It can be used for health assessment or classification of skin condition, and can also quantitatively evaluate subtle improvements in skin condition after the use of skin care products or beauty treatments.
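The GLCM features named above are available in standard tooling. A minimal sketch of the setup as described (distance 1 pixel, four angles at 45° intervals; scikit-image naming, where the second moment is exposed as "ASM" and entropy is computed by hand):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def skin_texture_features(gray_u8):
    """GLCM features of a 2-D uint8 image, averaged over 0/45/90/135 degrees."""
    angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray_u8, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {prop: float(graycoprops(glcm, prop).mean())
             for prop in ("ASM", "contrast", "correlation")}
    # Entropy -sum(p log2 p), averaged over the four angles.
    p = glcm[:, :, 0, :]
    feats["entropy"] = float(np.mean(
        [-(pa[pa > 0] * np.log2(pa[pa > 0])).sum()
         for pa in np.moveaxis(p, -1, 0)]))
    return feats
```

The hair removal and 3 × 3 median filtering described above would precede this step (for example with scipy.ndimage.median_filter).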
Error Discounting in Probabilistic Category Learning
Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.
2011-01-01
Some current theories of probabilistic categorization assume that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report two probabilistic-categorization experiments that investigated error discounting by shifting feedback probabilities to new values after different amounts of training. In both experiments, responding gradually became less sensitive to errors, and learning was slowed for some time after the feedback shift. Both results are indicative of error discounting. Quantitative modeling of the data revealed that adding a mechanism for error discounting significantly improved the fits of an exemplar-based and a rule-based associative learning model, as well as of a recency-based model of categorization. We conclude that error discounting is an important component of probabilistic learning. PMID:21355666
A rational account of pedagogical reasoning: teaching by, and learning from, examples.
Shafto, Patrick; Goodman, Noah D; Griffiths, Thomas L
2014-06-01
Much of learning and reasoning occurs in pedagogical situations: situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning. Copyright © 2014 Elsevier Inc. All rights reserved.
Halty, Virginia; Valdés, Matías; Tejera, Mauricio; Picasso, Valentín; Fort, Hugo
2017-12-01
The contribution of plant species richness to productivity and ecosystem functioning is a longstanding issue in ecology, with relevant implications for both conservation and agriculture. Both experiments and quantitative modeling are fundamental to the design of sustainable agroecosystems and the optimization of crop production. We modeled communities of perennial crop mixtures by using a generalized Lotka-Volterra model, i.e., a model in which the interspecific interactions are more general than purely competitive. We estimated the model parameters (carrying capacities and interaction coefficients) from, respectively, the observed biomass of monocultures and bicultures measured in a large diversity experiment of seven perennial forage species in Iowa, United States. The sign and absolute value of the interaction coefficients showed that the biological interactions between species pairs included amensalism, competition, and parasitism (asymmetric positive-negative interaction), with various degrees of intensity. We tested the model fit by simulating the combinations of more than two species and comparing them with the polyculture experimental data. Overall, theoretical predictions are in good agreement with the experiments. Using this model, we also simulated species combinations that were not sown. From all possible mixtures (sown and not sown) we identified which are the most productive species combinations. Our results demonstrate that a combination of experiments and modeling can contribute to the design of sustainable agricultural systems in general and to the optimization of crop production in particular. © 2017 by the Ecological Society of America.
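A minimal sketch of the generalized Lotka-Volterra setup (all parameter values below are hypothetical; in the study, carrying capacities come from monocultures and interaction coefficients from bicultures):

```python
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([0.9, 0.7, 1.1])        # intrinsic growth rates (hypothetical)
K = np.array([400.0, 250.0, 320.0])  # carrying capacities, e.g. from monocultures
A = np.array([[1.0, 0.6, -0.2],      # interaction matrix, e.g. from bicultures;
              [0.9, 1.0,  0.4],      # negative entries allow facilitation, so
              [0.1, 1.3,  1.0]])     # interactions are not purely competitive

def glv(t, x):
    # dx_i/dt = r_i * x_i * (1 - sum_j A_ij * x_j / K_i)
    return r * x * (1.0 - (A @ x) / K)

sol = solve_ivp(glv, (0.0, 50.0), np.array([10.0, 10.0, 10.0]))
polyculture_biomass = sol.y[:, -1].sum()  # simulated mixture yield at t = 50
```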
The SAGE Model of Social Psychological Research.
Power, Séamus A; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-05-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed.
A toolbox for discrete modelling of cell signalling dynamics.
Paterson, Yasmin Z; Shorthouse, David; Pleijzier, Markus W; Piterman, Nir; Bendtsen, Claus; Hall, Benjamin A; Fisher, Jasmin
2018-06-18
In an age where the volume of data regarding biological systems exceeds our ability to analyse it, many researchers are looking towards systems biology and computational modelling to help unravel the complexities of gene and protein regulatory networks. In particular, the use of discrete modelling allows generation of signalling networks in the absence of full quantitative descriptions of systems, which are necessary for ordinary differential equation (ODE) models. In order to make such techniques more accessible to mainstream researchers, tools such as the BioModelAnalyzer (BMA) have been developed to provide a user-friendly graphical interface for discrete modelling of biological systems. Here we use the BMA to build a library of discrete target functions of known canonical molecular interactions, translated from ordinary differential equations (ODEs). We then show that these BMA target functions can be used to reconstruct complex networks, which can correctly predict many known genetic perturbations. This new library supports the accessibility ethos behind the creation of BMA, providing a toolbox for the construction of complex cell signalling models without the need for extensive experience in computer programming or mathematical modelling, and allows for construction and simulation of complex biological systems with only small amounts of quantitative data.
Explaining quantum correlations through evolution of causal models
NASA Astrophysics Data System (ADS)
Harper, Robin; Chapman, Robert J.; Ferrie, Christopher; Granade, Christopher; Kueng, Richard; Naoumenko, Daniel; Flammia, Steven T.; Peruzzo, Alberto
2017-04-01
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multiobjective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables, and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
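A skeleton of the evolutionary search (generic genetic-algorithm machinery only; in the paper the individuals encode causal networks and the fitness trades off data fit against nonlocal causal influence, represented here by a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(genome):
    # Placeholder objective; the paper's version scores how well a causal
    # network reproduces observed correlations while penalizing the causal
    # effect of nonlocal variables.
    return -np.sum((genome - 0.5) ** 2)

def evolve(pop_size=50, length=16, generations=200, mut_scale=0.05):
    pop = rng.random((pop_size, length))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])  # tournament
        cut = rng.integers(1, length, size=pop_size)
        mask = np.arange(length)[None, :] < cut[:, None]
        pop = np.where(mask, parents, np.roll(parents, 1, axis=0))  # one-point crossover
        pop += mut_scale * rng.standard_normal(pop.shape)           # Gaussian mutation
    return max(pop, key=fitness)

best = evolve()
```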
Magnitude of the magnetic exchange interaction in the heavy-fermion antiferromagnet CeRhIn₅
Das, Pinaki; Lin, S. -Z.; Ghimire, N. J.; ...
2014-12-08
We have used high-resolution neutron spectroscopy experiments to determine the complete spin wave spectrum of the heavy-fermion antiferromagnet CeRhIn₅. The spin wave dispersion can be quantitatively reproduced with a simple frustrated J₁-J₂ model that also naturally explains the magnetic spin-spiral ground state of CeRhIn₅ and yields a dominant in-plane nearest-neighbor magnetic exchange constant J₀=0.74(3) meV. Our results lead the way to a quantitative understanding of the rich low-temperature phase diagram of the prominent CeTIn₅ (T = Co, Rh, Ir) class of heavy-fermion materials.
Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation
NASA Astrophysics Data System (ADS)
Choi, J.; Raguin, L. G.
2010-10-01
Skin microcirculation plays an important role in several diseases, including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR), based on an effective dispersion model for skin that leads to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates obtained by nonlinear least squares by exploiting available a priori knowledge of the model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
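The robust criterion sketched above can be written as a maximin design problem (schematic form; the article's exact estimation-error metric may differ):

```latex
\xi^{*} = \arg\max_{\xi \in \Xi}\; \min_{\theta \in \Theta}\; \log\det M(\xi, \theta)
```

where \(\xi\) collects the diffusion-gradient strengths and directions, \(\Theta\) is the assumed uncertainty set for the dispersion-model parameters, and \(M\) is the Fisher information matrix whose inverse bounds the covariance of the nonlinear least-squares estimates.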
Toward a quantitative theory of food consumption choices and body weight.
Buttet, Sebastien; Dolar, Veronika
2015-04-01
We propose a calibrated dynamic model of food consumption choices and body weight to study changes in daily caloric intake, weight, and the away-from-home share of calories consumed by adult men and women in the U.S. during the period between 1971 and 2006. Calibration reveals substantial preference heterogeneity between men and women. For example, utility losses stemming from weight gains are ten times greater for women compared to men. Counterfactual experiments show that changes in food prices and household income account for half of the increase in weight of adult men, but only a small fraction of women's weight. We argue that quantitative models of food consumption choices and body weight have a unique role to play in future research in the economics of obesity. Copyright © 2014 Elsevier B.V. All rights reserved.
Watkins, Daphne C.; Wharton, Tracy; Mitchell, Jamie A.; Matusko, Niki; Kales, Helen
2016-01-01
The purpose of this study was to explore the role of non-spousal family support on mental health among older, church-going African American men. The mixed methods objective was to employ a design that used existing qualitative and quantitative data to explore the interpretive context within which social and cultural experiences occur. Qualitative data (n = 21) were used to build a conceptual model that was tested using quantitative data (n = 401). Confirmatory factor analysis indicated an inverse association between non-spousal family support and distress. The comparative fit index, Tucker-Lewis fit index, and root mean square error of approximation indicated good model fit. This study offers unique methodological approaches to using existing, complementary data sources to understand the health of African American men. PMID:28943829
Quantitative and Functional Requirements for Bioluminescent Cancer Models.
Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier
2016-01-01
Bioluminescent cancer models are widely used, but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified the luciferase expression in BLM and HCT8/E11 transfected cancer cells, and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations is superior with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence. These inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
NASA Astrophysics Data System (ADS)
Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.
2017-04-01
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.
Pargett, Michael; Rundell, Ann E.; Buzzard, Gregery T.; Umulis, David M.
2014-01-01
Discovery in developmental biology is often driven by intuition that relies on the integration of multiple types of data such as fluorescent images, phenotypes, and the outcomes of biochemical assays. Mathematical modeling helps elucidate the biological mechanisms at play as the networks become increasingly large and complex. However, the available data is frequently under-utilized due to incompatibility with quantitative model tuning techniques. This is the case for stem cell regulation mechanisms explored in the Drosophila germarium through fluorescent immunohistochemistry. To enable better integration of biological data with modeling in this and similar situations, we have developed a general parameter estimation process to quantitatively optimize models with qualitative data. The process employs a modified version of the Optimal Scaling method from social and behavioral sciences, and multi-objective optimization to evaluate the trade-off between fitting different datasets (e.g. wild type vs. mutant). Using only published imaging data in the germarium, we first evaluated support for a published intracellular regulatory network by considering alternative connections of the same regulatory players. Simply screening networks against wild type data identified hundreds of feasible alternatives. Of these, five parsimonious variants were found and compared by multi-objective analysis including mutant data and dynamic constraints. With these data, the current model is supported over the alternatives, but support for a biochemically observed feedback element is weak (i.e. these data do not measure the feedback effect well). When also comparing new hypothetical models, the available data do not discriminate. To begin addressing the limitations in data, we performed a model-based experiment design and provide recommendations for experiments to refine model parameters and discriminate increasingly complex hypotheses. PMID:24626201
Tractable Experiment Design via Mathematical Surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
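For the first application, the workhorse estimator behind such refinement is importance sampling. A minimal sketch with a toy limit-state function standing in for the expensive computational model (all names and values hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def limit_state(x):
    # Toy stand-in for an expensive simulation: "failure" when g(x) < 0.
    return 4.0 - x.sum(axis=-1)

p = stats.norm(0.0, 1.0)   # nominal distribution of two iid uncertain inputs
q = stats.norm(1.5, 1.0)   # importance distribution shifted toward failure;
                           # a sequential scheme would refine q iteratively.
x = q.rvs(size=(100_000, 2), random_state=rng)
w = np.prod(p.pdf(x) / q.pdf(x), axis=1)             # likelihood ratios
p_fail = float(np.mean((limit_state(x) < 0.0) * w))  # unbiased estimate
```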
NASA Technical Reports Server (NTRS)
Musick, H. Brad
1993-01-01
The objectives of this research are: to develop and test predictive relations for the quantitative influence of vegetation canopy structure on wind erosion of semiarid rangeland soils, and to develop remote sensing methods for measuring the canopy structural parameters that determine sheltering against wind erosion. The influence of canopy structure on wind erosion will be investigated by means of wind-tunnel and field experiments using model roughness elements to simulate plant canopies. The canopy structural variables identified by the wind-tunnel and field experiments as important in determining vegetative sheltering against wind erosion will then be measured at a number of naturally vegetated field sites and compared with estimates of these variables derived from analysis of remotely sensed data.
Remote sensing image denoising application by generalized morphological component analysis
NASA Astrophysics Data System (ADS)
Yu, Chong; Chen, Xiong
2014-12-01
In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA), which extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. To assess the algorithms quantitatively, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated, evaluating the denoising effect from the gray-level fidelity aspect and the structure-level fidelity aspect, respectively. Quantitative analysis of the experimental results, consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising: it is difficult to distinguish the original noiseless image from the image recovered by GMCA by visual inspection.
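The two fidelity indices used above are simple to reproduce. Below is a minimal sketch: PSNR implemented directly, with SSIM taken from scikit-image's reference implementation; the random test images are placeholders and data_range=255 assumes 8-bit grayscale input.

```python
# Sketch of the two image-fidelity indices used to score denoising results.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, estimate, data_range=255.0):
    """Peak Signal-to-Noise Ratio (dB) between reference and estimate."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Placeholder images standing in for a noiseless reference and a denoised result.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
den = np.clip(ref.astype(int) + rng.integers(-5, 6, size=(64, 64)), 0, 255).astype(np.uint8)
print(psnr(ref, den), structural_similarity(ref, den, data_range=255))
```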
The use of phenomenology in mental health nursing research.
Picton, Caroline Jane; Moxham, Lorna; Patterson, Christopher
2017-12-18
Historically, mental health research has been strongly influenced by the underlying positivism of the quantitative paradigm. Quantitative research dominates scientific enquiry and contributes significantly to understanding our natural world. It has also greatly benefitted the medical model of healthcare. However, the more literary, silent, qualitative approach is gaining prominence in human sciences research, particularly mental healthcare research. This paper examines the qualitative methodological assumptions of phenomenology to illustrate the benefits to mental health research of studying the experiences of people with mental illness. Phenomenology is well positioned to ask how people with mental illness reflect on their experiences. Phenomenological research is congruent with the principles of contemporary mental healthcare, as person-centred care is favoured at all levels of mental healthcare, treatment, service and research. Phenomenology is a highly appropriate and suitable methodology for mental health research, given it includes people's experiences and enables silent voices to be heard. This overview of the development of phenomenology informs researchers new to phenomenological enquiry.
Quantitative laser diagnostic and modeling study of C2 and CH chemistry in combustion.
Köhler, Markus; Brockhinke, Andreas; Braun-Unkhoff, Marina; Kohse-Höinghaus, Katharina
2010-04-15
Quantitative concentration measurements of CH and C2 have been performed in laminar, premixed, flat flames of propene and cyclopentene with varying stoichiometry. A combination of cavity ring-down (CRD) spectroscopy and laser-induced fluorescence (LIF) was used to enable sensitive detection of these species with high spatial resolution. Previously, CH and C2 chemistry had been studied, predominantly in methane flames, to understand potential correlations of their formation and consumption. For flames of larger hydrocarbon fuels, however, quantitative information on these small intermediates is scarce, especially under fuel-rich conditions. Also, the combustion chemistry of C2 in particular has not been studied in detail, and although it has often been observed, its role in potential build-up reactions of higher hydrocarbon species is not well understood. The quantitative measurements performed here are the first to detect both species with good spatial resolution and high sensitivity in the same experiment in flames of C3 and C5 fuels. The experimental profiles were compared with results of combustion modeling to reveal details of the formation and consumption of these important combustion molecules, with the aim of furthering understanding of the role of C2 and its potential chemical interdependences with CH and other small radicals.
Beckett, Kate; Earthy, Sarah; Sleney, Jude; Barnes, Jo; Kellezi, Blerina; Barker, Marcus; Clarkson, Julie; Coffey, Frank; Elder, Georgina; Kendrick, Denise
2014-07-08
To explore views of service providers caring for injured people on: the extent to which services meet patients' needs and their perspectives on factors contributing to any identified gaps in service provision. Qualitative study nested within a quantitative multicentre longitudinal study assessing longer term impact of unintentional injuries in working age adults. Sampling frame for service providers was based on patient-reported service use in the quantitative study, patient interviews and advice of previously injured lay research advisers. Service providers' views were elicited through semistructured interviews. Data were analysed using thematic analysis. Participants were recruited from a range of settings and services in acute hospital trusts in four study centres (Bristol, Leicester, Nottingham and Surrey) and surrounding areas. 40 service providers from a range of disciplines. Service providers described two distinct models of trauma care: an 'ideal' model, informed by professional knowledge of the impact of injury and awareness of best models of care, and a 'real' model based on the realities of National Health Service (NHS) practice. Participants' 'ideal' model was consistent with standards of high-quality effective trauma care and while there were examples of services meeting the ideal model, 'real' care could also be fragmented and inequitable with major gaps in provision. Service provider accounts provide evidence of comprehensive understanding of patients' needs, awareness of best practice, compassion and research but reveal significant organisational and resource barriers limiting implementation of knowledge in practice. Service providers envisage an 'ideal' model of trauma care which is timely, equitable, effective and holistic, but this can differ from the care currently provided. Their experiences provide many suggestions for service improvements to bridge the gap between 'real' and 'ideal' care. Using service provider views to inform service design and delivery could enhance the quality, patient experience and outcomes of care.
Li, Hao; Lei, Xiaoguang; Huang, Baihui; Rizak, Joshua D; Yang, Lichuan; Yang, Shangchuan; Wu, Jing; Lü, Longbao; Wang, Jianhong; Yan, Ting; Li, Hongwei; Wang, Zhengbo; Hu, Yingzhou; Le, Weidong; Deng, Xingli; Li, Jiali; Xu, Lin; Zhang, Baorong; Hu, Xintian
2015-08-15
Non-human primate Parkinson's disease (PD) models are essential for PD research. The most extensively used PD monkey models are induced with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). However, the modeling process cannot be quantitatively controlled with MPTP. A new approach to quantitatively develop chronic PD monkey models would therefore help to advance the goals of "reduction, replacement and refinement" in animal experiments. A novel chronic PD monkey model was developed using the intracerebroventricular administration of 1-methyl-4-phenylpyridinium (MPP+) in cynomolgus monkeys (Macaca fascicularis). This approach produced stable and consistent PD monkeys with typical motor symptoms and pathological changes. More importantly, a sigmoidal relationship, Y = 8.15801·e^(−0.245/X) (R = 0.73), was discovered between the PD score (Y) and the cumulative dose of MPP+ (X). This relationship was then used to develop two additional PD monkeys on a specific time schedule (4 weeks), with a planned PD score of 7, by controlling the dose and frequency of MPP+ administration as an independent validation of the formula. The ability to develop parkinsonian monkeys within controlled time frames by regulating the accumulated intracerebroventricular dose of MPP+, while limiting the side effects often seen in models developed with peripheral administration of MPTP, makes this model highly suitable for treatment development. This novel approach provides an edge in evaluating the mechanisms of PD pathology associated with environmental toxins and novel treatment approaches, as the formula provides a "map" to control and predict the modeling process.
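Because the fitted curve is monotone, it can be inverted to plan a dosing schedule for a target PD score, which is how the two validation animals were scheduled. A minimal sketch, assuming the formula exactly as printed (X is the cumulative MPP+ dose in the study's units):

```python
# Invert Y = 8.15801 * exp(-0.245 / X) to find the cumulative dose X that
# yields a target PD score Y; valid for 0 < Y < 8.15801.
import math

def pd_score(cumulative_dose):
    return 8.15801 * math.exp(-0.245 / cumulative_dose)

def dose_for_score(target_score):
    return -0.245 / math.log(target_score / 8.15801)

target = 7.0                             # planned PD score, as in the study
dose = dose_for_score(target)
print(dose)                              # required cumulative dose, ~1.60
print(pd_score(dose))                    # sanity check: recovers 7.0
```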
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124
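In the single-observable case, the maximum-entropy correction reduces to an exponential reweighting of simulation frames, with one Lagrange multiplier tuned so that the reweighted average reproduces the measurement. The sketch below assumes this common formulation; the data are synthetic and the bracketing interval for the multiplier is arbitrary.

```python
# Maximum-entropy reweighting sketch: find frame weights, minimally perturbed
# from uniform, whose average of one observable matches the experiment.
import numpy as np
from scipy.optimize import brentq

def maxent_weights(obs_per_frame, experimental_value):
    O = np.asarray(obs_per_frame, dtype=float)

    def constraint(lam):
        w = np.exp(-lam * (O - O.mean()))   # shifted for numerical stability
        w /= w.sum()
        return np.dot(w, O) - experimental_value

    lam = brentq(constraint, -50.0, 50.0)   # arbitrary bracketing interval
    w = np.exp(-lam * (O - O.mean()))
    return w / w.sum()

# Synthetic example: a simulated observable whose mean (5.2) is biased
# relative to a hypothetical measured value of 5.0.
rng = np.random.default_rng(1)
frames = rng.normal(5.2, 0.4, size=1000)
weights = maxent_weights(frames, 5.0)
print(np.dot(weights, frames))              # ~5.0 after reweighting
```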
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juanes, Ruben
The overall goals of this research are: (1) to determine the physical fate of single and multiple methane bubbles emitted to the water column by dissociating gas hydrates at seep sites deep within the hydrate stability zone or at the updip limit of gas hydrate stability, and (2) to quantitatively link theoretical and laboratory findings on methane transport to the analysis of real-world field-scale methane plume data placed within the context of the degrading methane hydrate province on the US Atlantic margin. The project is arranged to advance on three interrelated fronts (numerical modeling, laboratory experiments, and analysis of field-based plume data) simultaneously. The fundamental objectives of each component are the following: Numerical modeling: constraining the conditions under which rising bubbles become armored with hydrate, the impact of hydrate armoring on the eventual fate of a bubble's methane, and the role of multiple bubble interactions in survival of methane plumes to very shallow depths in the water column. Laboratory experiments: exploring the parameter space (e.g., bubble size, gas saturation in the liquid phase, "proximity" to the stability boundary) for formation of a hydrate shell around a free bubble in water, the rise rate of such bubbles, and the bubble's acoustic characteristics using field-scale frequencies. Field component: extending the results of numerical modeling and laboratory experiments to the field scale using brand new, existing, public-domain, state-of-the-art real-world data on US Atlantic margin methane seeps, without acquiring new field data in the course of this particular project. This component quantitatively analyzes data on Atlantic margin methane plumes and places those plumes and their corresponding seeps within the context of gas hydrate degradation processes on this margin.
Modelling viscoacoustic wave propagation with the lattice Boltzmann method.
Xia, Muming; Wang, Shucheng; Zhou, Hui; Shan, Xiaowen; Chen, Hanming; Li, Qingqing; Zhang, Qingchen
2017-08-31
In this paper, the lattice Boltzmann method (LBM) is employed to simulate wave propagation in viscous media. The LBM is a microscopic method that models waves by tracking the evolution of a large number of discrete particles. By choosing different relaxation times in LBM experiments and using the spectral ratio method, we can reveal the relationship between the quality factor Q and the relaxation parameter τ in the LBM. A two-dimensional (2D) homogeneous model and a two-layered model are tested in the numerical experiments, and the LBM results are compared against the reference solution of the viscoacoustic equations based on the Kelvin-Voigt model calculated by the finite difference method (FDM). The wavefields and amplitude spectra obtained by the LBM coincide with those from the FDM, which demonstrates the capability of the LBM with a single relaxation time. The new scheme is relatively simple and efficient to implement compared with traditional lattice methods. In addition, extensive experiments show that the relaxation time of the LBM has a quantitative relationship with Q. Such a novel scheme offers an alternative forward-modelling kernel for seismic inversion and a new model to describe the underground media.
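The spectral ratio step can be sketched compactly: in a constant-Q medium the log ratio of amplitude spectra recorded after travel times differing by t is linear in frequency with slope −πt/Q, so Q follows from a straight-line fit. The synthetic spectra below are placeholders, not LBM output.

```python
# Spectral-ratio estimate of Q from amplitude spectra at two receivers:
# ln(A_far/A_near) = -pi * f * t / Q + const.
import numpy as np

def estimate_q(freqs, spec_near, spec_far, travel_time):
    slope, _ = np.polyfit(freqs, np.log(spec_far / spec_near), 1)
    return -np.pi * travel_time / slope

# Synthetic check: constant-Q attenuation with Q = 30 over 0.5 s.
freqs = np.linspace(5.0, 60.0, 50)
near = np.exp(-0.1 * freqs)                        # arbitrary source spectrum
far = near * np.exp(-np.pi * freqs * 0.5 / 30.0)   # constant-Q attenuation
print(estimate_q(freqs, near, far, 0.5))           # ~30
```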
XDesign: an open-source software package for designing X-ray imaging phantoms and experiments.
Ching, Daniel J; Gürsoy, Dogˇa
2017-03-01
The development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.
Fast inertial particle manipulation in oscillating flows
NASA Astrophysics Data System (ADS)
Thameem, Raqeeb; Rallabandi, Bhargav; Hilgenfeldt, Sascha
2017-05-01
It is demonstrated that micron-sized particles suspended in fluid near oscillating interfaces experience strong inertial displacements above and beyond the fluid streaming. Experiments with oscillating bubbles show rectified particle lift over extraordinarily short (millisecond) times. A quantitative model on both the oscillatory and the steady time scales describes the particle displacement relative to the fluid motion. The formalism yields analytical predictions confirming the observed scaling behavior with particle size and experimental control parameters. It applies to a large class of oscillatory flows with applications from particle trapping to size sorting.
Quantitative In Vivo Imaging of Breast Tumor Extracellular Matrix
2010-05-01
Second-harmonic generation imaging was used to examine collagen in breast tumors and in dermis from mouse models of Osteogenesis Imperfecta (OIM) [1–5,7]; the F/B (forward-to-backward) ratio revealed the length scale of ordering in the fibers.
Cryogenic Tank Modeling for the Saturn AS-203 Experiment
NASA Technical Reports Server (NTRS)
Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.
2006-01-01
A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately-instrumented, low-gravity, cryogenic self-pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test, the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface, which in turn increases the pressure within the tank during the 5360-second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self-pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by test data. Ullage temperature results are also in good agreement with the test, where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Gaurav; Raju, Mandhapati P.; Sung, Chih-Jen
2010-07-15
In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in predicting the first-stage ignition delays, although quantitative discrepancy in the prediction of the total ignition delays and the pressure rise during first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations is reduced. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation.
Microwave Remote Sensing and the Cold Land Processes Field Experiment
NASA Technical Reports Server (NTRS)
Kim, Edward J.; Cline, Don; Davis, Bert; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
The Cold Land Processes Field Experiment (CLPX) has been designed to advance our understanding of the terrestrial cryosphere. Developing a more complete understanding of fluxes, storage, and transformations of water and energy in cold land areas is a critical focus of the NASA Earth Science Enterprise Research Strategy, the NASA Global Water and Energy Cycle (GWEC) Initiative, the Global Energy and Water Cycle Experiment (GEWEX), and the GEWEX Americas Prediction Project (GAPP). The movement of water and energy through cold regions in turn plays a large role in ecological activity and biogeochemical cycles. Quantitative understanding of cold land processes over large areas will require synergistic advancements in 1) understanding how cold land processes, most comprehensively understood at local or hillslope scales, extend to larger scales, 2) improved representation of cold land processes in coupled and uncoupled land-surface models, and 3) a breakthrough in large-scale observation of hydrologic properties, including snow characteristics, soil moisture, the extent of frozen soils, and the transition between frozen and thawed soil conditions. The CLPX Plan has been developed through the efforts of over 60 interested scientists who have participated in the NASA Cold Land Processes Working Group (CLPWG). This group is charged with the task of assessing, planning and implementing the required background science, technology, and application infrastructure to support successful land surface hydrology remote sensing space missions. A major product of the experiment will be a comprehensive, legacy data set that will energize many aspects of cold land processes research. The CLPX will focus on developing the quantitative understanding, models, and measurements necessary to extend our local-scale understanding of water fluxes, storage, and transformations to regional and global scales. The experiment will particularly emphasize developing a strong synergism between process-oriented understanding, land surface models and microwave remote sensing. The experimental design is a multi-sensor, multi-scale (1 ha to 160,000 km²) approach to providing the comprehensive data set necessary to address several experiment objectives. A description focusing on the microwave remote sensing components (ground, airborne, and spaceborne) of the experiment will be presented.
X-ray radiography of cavitation in a beryllium alloy nozzle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, Daniel J.; Matusik, Katarzyna E.; Kastengren, Alan L.
In this study, making quantitative measurements of the vapor distribution in a cavitating nozzle is difficult, owing to the strong scattering of visible light at gas–liquid boundaries and wall boundaries, and the small lengths and time scales involved. The transparent models required for optical experiments are also limited in terms of maximum pressure and operating life. Over the past few years, x-ray radiography experiments at Argonne's Advanced Photon Source have demonstrated the ability to perform quantitative measurements of the line-of-sight projected vapor fraction in submerged, cavitating plastic nozzles. In this paper, we present the results of new radiography experiments performed on a submerged beryllium nozzle which is 520 μm in diameter, with a length/diameter ratio of 6. Beryllium is a light, hard metal that is very transparent to x-rays due to its low atomic number. We present quantitative measurements of cavitation vapor distribution conducted over a range of non-dimensional cavitation and Reynolds numbers, up to values typical of gasoline and diesel fuel injectors. A novel aspect of this work is the ability to quantitatively measure the area contraction along the nozzle with high spatial resolution. Comparisons of the vapor distribution, area contraction and discharge coefficients are made between the beryllium nozzle and plastic nozzles of the same nominal geometry. When gas is dissolved in the fuel, the vapor distribution can be quite different from that found in plastic nozzles of the same dimensions, although the discharge coefficients are unaffected. In the beryllium nozzle, there were substantially fewer machining defects to act as nucleation sites for the precipitation of bubbles from dissolved gases in the fuel, and as such the effect on the vapor distribution was greatly reduced.
Goeminne, Ludger J E; Gevaert, Kris; Clement, Lieven
2018-01-16
Label-free shotgun proteomics is routinely used to assess proteomes. However, extracting relevant information from the massive amounts of generated data remains difficult. This tutorial provides a strong foundation for the analysis of quantitative proteomics data. We provide key statistical concepts that help researchers to design proteomics experiments and we showcase how to analyze quantitative proteomics data using our recent free and open-source R package MSqRob, which was developed to implement the peptide-level robust ridge regression method for relative protein quantification described by Goeminne et al. MSqRob can handle virtually any experimental proteomics design and outputs proteins ordered by statistical significance. Moreover, its graphical user interface and interactive diagnostic plots provide easy inspection and detection of anomalies in the data and flaws in the data analysis, allowing deeper assessment of the validity of results and a critical review of the experimental design. Our tutorial discusses interactive preprocessing, data analysis and visualization of label-free MS-based quantitative proteomics experiments with simple and more complex designs. We provide well-documented scripts to run analyses in bash mode on GitHub, enabling the integration of MSqRob in automated pipelines on cluster environments (https://github.com/statOmics/MSqRob). The concepts outlined in this tutorial aid in designing better experiments and analyzing the resulting data more appropriately. The two case studies using the MSqRob graphical user interface will contribute to wider adoption of advanced peptide-based models, resulting in higher-quality data analysis workflows and more reproducible results in the proteomics community. We also provide well-documented scripts for experienced users who aim to automate MSqRob on cluster environments.
When More of A Doesn't Result in More of B: Physics Experiments with a Surprising Outcome
ERIC Educational Resources Information Center
Tsakmaki, Paraskevi; Koumaras, Panagiotis
2016-01-01
Science education research has shown that students use causal reasoning, particularly the model "agent--instrument--object," to explain or predict the outcome of many natural situations. Students' reasoning seems to be based on a small set of intuitive rules. One of these rules quantitatively correlates the outcome of an experiment…
ERIC Educational Resources Information Center
Gülpinar, Mehmet Ali; Isoglu-Alkaç, Ümmühan; Yegen, Berrak Çaglayan
2015-01-01
Recently, integrated and contextual learning models such as problem-based learning (PBL) and brain/mind learning (BML) have become prominent. The present study aimed to develop and evaluate a PBL program enriched with BML principles. In this study, participants were 295 first-year medical students. The study used both quantitative and qualitative…
ERIC Educational Resources Information Center
Tavukcu, Tahir
2016-01-01
In this research, the aim is to determine the effect of digital-script support on the attitudes of postgraduate students towards scientific research and codes of conduct. This is a quantitative study designed according to the pretest-posttest experimental model with experiment and control groups. In both groups, lessons…
Toward Quantifying the Electrostatic Transduction Mechanism in Carbon Nanotube Biomolecular Sensors
NASA Astrophysics Data System (ADS)
Lerner, Mitchell; Kybert, Nicholas; Mendoza, Ryan; Dailey, Jennifer; Johnson, A. T. Charlie
2013-03-01
Despite the great promise of carbon nanotube field-effect transistors (CNT FETs) for applications in chemical and biochemical detection, a quantitative understanding of sensor responses is lacking. To explore the role of electrostatics in sensor transduction, experiments were conducted with a set of similar compounds designed to adsorb onto the CNT FET via a pyrene linker group and take on a set of known charge states under ambient conditions. Acidic and basic species were observed to induce threshold voltage shifts of opposite sign, consistent with gating of the CNT FET by local charges due to protonation or deprotonation of the pyrene compounds by interfacial water. The magnitude of the gate voltage shift was controlled by the distance between the charged group and the CNT. Additionally, functionalization with an uncharged pyrene compound showed a threshold shift ascribed to its molecular dipole moment. This work illustrates a method for producing CNT FETs with controlled values of the turnoff gate voltage, and more generally, these results will inform the development of quantitative models for the response of CNT FET chemical and biochemical sensors. As an example, the results of an experiment detecting biomarkers of Lyme disease will be discussed in the context of this model.
Intelmann, Daniel; Demmer, Oliver; Desmer, Nina; Hofmann, Thomas
2009-11-25
The typical bitterness of fresh beer is well-known to decrease in intensity and to change in quality with increasing age. This phenomenon was recently shown to be caused by the conversion of bitter tasting trans-iso-alpha-acids into lingering and harsh bitter tasting tri- and tetracyclic degradation products such as tricyclocohumol, tricyclocohumene, isotricyclocohumene, tetracyclocohumol, and epitetracyclocohumol. Interestingly, the formation of these compounds was shown to be trans-specific and the corresponding cis-iso-alpha-acids were found to be comparatively stable. Application of 18O stable isotope labeling as well as quantitative model studies combined with LC-MS/MS experiments, followed by computer-based molecular dynamics simulations revealed for the first time a conclusive mechanism explaining the stereospecific transformation of trans-iso-alpha-acids into the tri- and tetracyclic degradation products. This transformation was proposed to be induced by a proton-catalyzed carbon/carbon bond formation between the carbonyl atom C(1') of the isohexenoyl moiety and the alkene carbon C(2'') of the isoprenyl moiety of the trans-iso-alpha-acids.
Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe
2013-08-01
Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transient estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to recordings of both single cells and bulk-stained tissues, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies.
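The single-trial fitting idea can be sketched with a generic nonlinear regression: model the trace as a constant background plus a transient, fit by least squares, and read approximate confidence intervals off the parameter covariance. The transient shape, Gaussian noise, and all values below are illustrative assumptions; the published method uses a more careful probabilistic description of the acquisition noise.

```python
# Fit background + calcium transient to a single noisy trace and report
# approximate 95% confidence intervals from the parameter covariance.
import numpy as np
from scipy.optimize import curve_fit

def fluo_model(t, background, amplitude, tau, t0):
    # Fast rise (fixed 50 ms time constant) followed by exponential decay.
    rise = np.where(t >= t0, 1.0 - np.exp(-(t - t0) / 0.05), 0.0)
    decay = np.where(t >= t0, np.exp(-(t - t0) / tau), 1.0)
    return background + amplitude * rise * decay

t = np.linspace(0.0, 2.0, 200)
rng = np.random.default_rng(2)
data = fluo_model(t, 100.0, 40.0, 0.4, 0.5) + rng.normal(0.0, 2.0, t.size)

popt, pcov = curve_fit(fluo_model, t, data, p0=(90.0, 30.0, 0.3, 0.4))
stderr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("background", "amplitude", "tau", "t0"), popt, stderr):
    print(f"{name}: {val:.2f} +/- {1.96 * err:.2f}")
```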
NASA Astrophysics Data System (ADS)
Wang, Hong-Fei; Gan, Wei; Lu, Rong; Rao, Yi; Wu, Bao-Hua
Sum frequency generation vibrational spectroscopy (SFG-VS) has been proven to be a uniquely effective spectroscopic technique in the investigation of molecular structure and conformations, as well as the dynamics of molecular interfaces. However, the ability to apply SFG-VS to complex molecular interfaces has been limited by the ability to extract quantitative information from SFG-VS experiments. In this review, we assess the limitations, issues, techniques and methodologies in quantitative orientational and spectral analysis with SFG-VS. Based on these assessments, we also summarize recent developments in methodologies for quantitative orientational and spectral analysis in SFG-VS, and their applications to detailed analysis of SFG-VS data of various vapour/neat liquid interfaces. A rigorous formulation of the polarization null angle (PNA) method is given for accurate determination of the orientational parameter D =
Residual Stress Analysis in Welded Component.
NASA Astrophysics Data System (ADS)
Rouhi, Shahab; Yoshida, Sanichiro; Miura, Fumiya; Sasaki, Tomohiro
Due to local heating, thermal stresses occur during welding, and residual stress and distortion remain after welding. Welding distortion has negative effects on the accuracy of assembly, exterior appearance, and various strengths of the welded structures. To date, many experiments and numerical analyses have been carried out to assess residual stress. However, quantitative estimation of residual stress based on experiment may involve substantial uncertainty and complexity in the measurement process. To comprehensively understand these phenomena, it is necessary to carry out further research by means of both experiment and numerical simulation. In this research, we conduct Finite Element Analysis (FEA) for a simple butt-welded metal plate specimen. Thermal input and the resultant expansion are modeled with a thermal expansion FEA module, and the resultant constitutive response of the material is modeled with a continuum mechanics FEA module. The residual stress is modeled based on permanent deformation occurring during the heating phase of the material. Experiments have also been carried out to compare with the FEA results. Numerical and experimental results show qualitative agreement. The present work was supported by the Louisiana Board of Regents (LEQSF(2016-17)-RD-C-13).
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
Singularities in x-ray spectra of metals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahan, G.D.
1987-08-01
The x-ray spectroscopies discussed are absorption, emission, and photoemission. The singularities show up in each of them in a different manner. In absorption and emission they show up as power-law singularities at the threshold frequencies. This review emphasizes two themes. First, a simple model is proposed to describe this phenomenon, now called the MND model after Mahan-Nozieres-De Dominicis. Exact analytical solutions are now available for this model for the three spectroscopies discussed above. These analytical models can be evaluated numerically in a simple way. The second theme of this review is that great care must be used when comparing the theory to experiment. A number of factors influence the edge shapes in x-ray spectroscopy. The edge singularities play an important role, and are observed in many metals. Quantitative fits of the theory to experiment require the consideration of other factors.
Establishing a Novel Modeling Tool: A Python-Based Interface for a Neuromorphic Hardware System
Brüderle, Daniel; Müller, Eric; Davison, Andrew; Muller, Eilif; Schemmel, Johannes; Meier, Karlheinz
2008-01-01
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated. PMID:19562085
Heavy-Ion Microbeam Fault Injection into SRAM-Based FPGA Implementations of Cryptographic Circuits
NASA Astrophysics Data System (ADS)
Li, Huiyun; Du, Guanghua; Shao, Cuiping; Dai, Liang; Xu, Guoqing; Guo, Jinlong
2015-06-01
Transistors hit by heavy ions may conduct transiently, thereby introducing transient logic errors. Attackers can exploit these abnormal behaviors and extract sensitive information from electronic devices. This paper demonstrates an ion irradiation fault injection attack experiment on a cryptographic field-programmable gate array (FPGA) circuit. The experiment proved that the commercial FPGA chip is vulnerable to low linear-energy-transfer carbon irradiation, and that the attack can cause the leakage of secret key bits. A statistical model is established to estimate the probability of an effective fault injection attack on cryptographic integrated circuits. The model incorporates the temporal, spatial, and logical probabilities of an effective attack on the cryptographic circuits. The rate of successful attack calculated from the model conforms well to the experimental results. This quantitative success-rate model can help designers, as well as third-party assessment organizations, evaluate security risk.
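A minimal sketch of such a multiplicative success-rate model is given below. Treating the temporal, spatial, and logical factors as independent is an assumption made here for illustration; the paper's statistical model may couple them differently, and all numbers are invented.

```python
# Toy multiplicative model: an ion strike is effective only if it hits during
# a sensitive clock interval (temporal), lands on a sensitive region of the
# die (spatial), and produces a flip that propagates to the output (logical).
def attack_success_rate(p_temporal, p_spatial, p_logical, ions_per_second):
    p_effective = p_temporal * p_spatial * p_logical  # independence assumed
    return p_effective, p_effective * ions_per_second

# Invented figures: 10% sensitive time window, 5% sensitive area,
# 30% survival of logical masking, 1000 ion hits per second.
p, rate = attack_success_rate(0.10, 0.05, 0.30, 1000.0)
print(f"per-ion success probability: {p:.4f}, effective faults/s: {rate:.1f}")
```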
Some-or-none recollection: Evidence from item and source memory.
Onyper, Serge V; Zhang, Yaofei X; Howard, Marc W
2010-05-01
Dual-process theory hypothesizes that recognition memory depends on 2 distinguishable memory signals. Recollection reflects conscious recovery of detailed information about the learning episode. Familiarity reflects a memory signal that is not accompanied by a vivid conscious experience but nonetheless enables participants to distinguish recently experienced probe items from novel ones. This dual-process explanation of recognition memory has gained wide acceptance among cognitive neuroscientists and some cognitive psychologists. Nonetheless, its difficulty in providing a quantitatively satisfactory description of performance has precluded a consensus not only regarding the theoretical structure of recognition memory but also about how to best measure recognition accuracy. In 2 experiments we show that neither the standard formulation of dual-process signal detection (DPSD) theory nor a widely used single-process model called the unequal-variance signal-detection (UVSD) model provides a satisfactory explanation of recognition memory across different types of stimuli (words and travel scenes). In the variable-recollection dual-process (VRDP) model, recollection fails for some old probe items, as in standard formulations of DPSD, but gives rise to a continuous distribution of memory strengths when it succeeds. The VRDP can approximate both the DPSD and UVSD. In both experiments it provides a consistently superior fit across materials to the superset of the DPSD and UVSD. The VRDP offers a simple explanation of the form of conjoint item-source judgments, something neither the DPSD nor UVSD accomplishes. The success of the VRDP supports the core assumptions of dual-process theory by providing an excellent quantitative description of recognition performance across materials and response criteria.
The Ars Moriendi Model for Spiritual Assessment: A Mixed-Methods Evaluation.
Vermandere, Mieke; Warmenhoven, Franca; Van Severen, Evie; De Lepeleire, Jan; Aertgeerts, Bert
2015-07-01
To explore nurses' and physicians' experiences with the ars moriendi model (AMM) for spiritual assessment. Convergent, parallel, mixed-methods. Palliative home care in Belgium. 17 nurses and 4 family physicians (FPs) in the quantitative phase, and 19 nurses and 5 FPs in the later qualitative phase. A survey was used to investigate first impressions after a spiritual assessment. Descriptive statistics were applied for the analysis of the survey. In a semistructured interview a few weeks later, nurses and physicians were asked to describe their experiences with using the AMM. Interviews were audio recorded, transcribed, and qualitatively analyzed. Quantitative and qualitative results were compared to see whether the findings were confirmative. The survey assessed the feasibility of the AMM for use in palliative home care, whereas the semistructured interviews collected in-depth descriptions of healthcare providers' (HCPs') experiences with the AMM. The AMM was perceived as valuable. Many patients shared their wishes and expectations about the end of life. Most HCPs said they felt that the patient-provider relationship had been strengthened as a result of the spiritual assessment. Almost all assessments raised new issues; however, many dyads had informally discussed spiritual issues before. The current study suggests that HCPs believe that the AMM is a useful spiritual assessment tool. Guided by the model, HCPs can gather information about the context, life story, and meaningful connections of patients, which enables them to facilitate person-centered care. The AMM appears to be an important tool for spiritual assessment that can offer more insight into patients' spirituality and help nurses to establish person-centered end-of-life care.
Comparative assessment of fluorescent proteins for in vivo imaging in an animal model system.
Heppert, Jennifer K; Dickinson, Daniel J; Pani, Ariel M; Higgins, Christopher D; Steward, Annette; Ahringer, Julie; Kuhn, Jeffrey R; Goldstein, Bob
2016-11-07
Fluorescent protein tags are fundamental tools used to visualize gene products and analyze their dynamics in vivo. Recent advances in genome editing have expedited the precise insertion of fluorescent protein tags into the genomes of diverse organisms. These advances expand the potential of in vivo imaging experiments and facilitate experimentation with new, bright, photostable fluorescent proteins. Most quantitative comparisons of the brightness and photostability of different fluorescent proteins have been made in vitro, removed from biological variables that govern their performance in cells or organisms. To address this gap, we quantitatively assessed fluorescent protein properties in vivo in an animal model system. We generated transgenic Caenorhabditis elegans strains expressing green, yellow, or red fluorescent proteins in embryos and imaged embryos expressing different fluorescent proteins under the same conditions for direct comparison. We found that mNeonGreen was not as bright in vivo as predicted based on in vitro data but is a better tag than GFP for specific kinds of experiments, and we report on optimal red fluorescent proteins. These results identify ideal fluorescent proteins for imaging in vivo in C. elegans embryos and suggest good candidate fluorescent proteins to test in other animal model systems for in vivo imaging experiments.
Hurricanes and Climate: the U.S. CLIVAR Working Group on Hurricanes
NASA Technical Reports Server (NTRS)
Walsh, Kevin; Camargo, Suzana J.; Vecchi, Gabriel A.; Daloz, Anne Sophie; Elsner, James; Emanuel, Kerry; Horn, Michael; Lim, Young-Kwon; Roberts, Malcolm; Patricola, Christina;
2015-01-01
While a quantitative climate theory of tropical cyclone formation remains elusive, considerable progress has been made recently in our ability to simulate tropical cyclone climatologies and understand the relationship between climate and tropical cyclone formation. Climate models are now able to simulate a realistic rate of global tropical cyclone formation, although simulation of the Atlantic tropical cyclone climatology remains challenging unless horizontal resolutions finer than 50 km are employed. The idealized experiments of the Hurricane Working Group of U.S. CLIVAR, combined with results from other model simulations, have suggested relationships between tropical cyclone formation rates and climate variables such as mid-tropospheric vertical velocity. Systematic differences are shown between experiments in which only sea surface temperature is increased and experiments in which only atmospheric carbon dioxide is increased, with the carbon dioxide experiments more likely to demonstrate a decrease in tropical cyclone numbers. Further experiments are proposed that may improve our understanding of the relationship between climate and tropical cyclone formation, including experiments with two-way interaction between the ocean and the atmosphere and variations in atmospheric aerosols.
The visual discrimination of bending.
Norman, J Farley; Wiesemann, Elizabeth Y; Norman, Hideko F; Taylor, M Jett; Craft, Warren D
2007-01-01
The sensitivity of observers to nonrigid bending was evaluated in two experiments. In both experiments, observers were required to discriminate on any given trial which of two bending rods was more elastic. In experiment 1, both rods within a trial bent in the same oriented plane, either in a frontoparallel plane or in depth. In experiment 2, the two rods within any given trial bent in different, randomly chosen orientations in depth. The results of both experiments revealed that human observers are sensitive to, and can reliably detect, relatively small differences in bending (the average Weber fraction across experiments 1 and 2 was 9.0%). The performance of the human observers was compared to that of models that based their elasticity judgments upon either static projected curvature or mean and maximal projected speed. Despite the fact that all of the observers reported compelling 3-D perceptions of bending in depth, their judgments were both qualitatively and quantitatively consistent with the performance of the models. This similarity suggests that relatively straightforward information about the elasticity of simple bending objects is available in projected retinal images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and, finally, how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
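The flavor of such a global metric can be sketched as a weighted average of per-observable distances, each normalized by the combined experimental and simulation uncertainties. The normalization and weights below are illustrative assumptions, not the paper's exact definitions.

```python
# Composite experiment-simulation agreement metric over several observables.
import numpy as np

def observable_distance(exp_val, exp_err, sim_val, sim_err):
    # Distance in units of the combined one-sigma uncertainty.
    return abs(exp_val - sim_val) / np.sqrt(exp_err**2 + sim_err**2)

def composite_metric(observables, weights=None):
    """observables: iterable of (exp_val, exp_err, sim_val, sim_err)."""
    d = np.array([observable_distance(*obs) for obs in observables])
    w = np.ones_like(d) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * d) / np.sum(w))

# Invented comparison over three turbulence observables.
obs = [(1.0, 0.10, 1.15, 0.05),   # e.g. fluctuation amplitude
       (3.2, 0.40, 2.60, 0.30),   # e.g. radial correlation length
       (0.5, 0.05, 0.52, 0.02)]   # e.g. skewness of density fluctuations
print(composite_metric(obs))
```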
Biomechanics-based in silico medicine: the manifesto of a new science.
Viceconti, Marco
2015-01-21
In this perspective article we discuss the role of contemporary biomechanics in the light of recent applications such as the development of the so-called Virtual Physiological Human technologies for physiology-based in silico medicine. In order to build Virtual Physiological Human (VPH) models, computer models that capture and integrate the complex systemic dynamics of living organisms across radically different space-time scales, we need to re-formulate a vast body of existing biology and physiology knowledge so that it is expressed as quantitative hypotheses that can be stated in mathematical terms. Once the predictive accuracy of these models is confirmed against controlled experiments and against clinical observations, we will have VPH models that can reliably predict certain quantitative changes in the health status of a given patient, and, more importantly, we will have a theory, in the true meaning this word has in the scientific method. In this scenario, biomechanics plays a very important role: it is one of the few areas of the life sciences where we attempt to build full mechanistic explanations based on quantitative observations; in other words, we investigate living organisms as physical systems. This is in our opinion a Copernican revolution, around which the scope of biomechanics should be re-defined. Thus, we propose a new definition for our research domain: "Biomechanics is the study of living organisms as mechanistic systems".
Contextual Advantage for State Discrimination
NASA Astrophysics Data System (ADS)
Schmid, David; Spekkens, Robert W.
2018-02-01
Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
The Attentional Drift-Diffusion Model Extends to Simple Purchasing Decisions
Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio
2012-01-01
How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions. PMID:22707945
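To make the aDDM concrete, the sketch below simulates binary value-based choices under the model's core assumption: a relative decision value drifts toward the currently fixated option, with the unattended option's value discounted by a factor theta, until a barrier is crossed. This is a minimal Python illustration; the fixed 300 ms alternating fixations and the parameter values (d, theta, sigma, barrier) are simplifying assumptions, not the authors' fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_addm_trial(v_left, v_right, d=0.0005, theta=0.3, sigma=0.02,
                        barrier=1.0, fix_ms=300):
    """One binary trial: the relative decision value (RDV) drifts toward
    the fixated item, with the unattended item's value discounted by theta;
    a choice is made when |RDV| crosses the barrier."""
    rdv, t, look_left = 0.0, 0, rng.random() < 0.5
    while abs(rdv) < barrier:
        for _ in range(fix_ms):                      # one fixation, 1 ms steps
            if look_left:
                drift = d * (v_left - theta * v_right)
            else:
                drift = -d * (v_right - theta * v_left)
            rdv += drift + sigma * rng.normal()
            t += 1
            if abs(rdv) >= barrier:
                break
        look_left = not look_left                    # alternate fixations
    return ("left" if rdv > 0 else "right"), t

trials = [simulate_addm_trial(v_left=3, v_right=2) for _ in range(200)]
p_left = np.mean([choice == "left" for choice, _ in trials])
print(f"P(choose left) = {p_left:.2f}")
```

In this toy version, the weaker fixation-induced bias that the paper reports for purchasing decisions would correspond roughly to moving theta closer to 1.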
Single cell model for simultaneous drug delivery and efflux.
Yi, C; Saidel, G M; Gratzl, M
1999-01-01
Multidrug resistance (MDR) of some cancer cells is a major challenge for the chemotherapy of systemic cancers. To experimentally uncover the cellular mechanisms leading to MDR, it is necessary to quantitatively assess both drug influx into, and efflux from, the cells exposed to drug treatment. By using a novel molecular microdelivery system to enforce continuous and adjustable drug influx into single cells by controlled diffusion through a gel plug in a micropipet tip, drug resistance studies can now be performed at the single-cell level. Our dynamic model of this scheme incorporates drug delivery, diffusive mixing, accumulation inside the cytoplasm, and efflux by both passive and active membrane transport. Model simulations using available experimental information on these processes can assist in the design of MDR-related experiments on single cancer cells, which are expected to lead to a quantitative evaluation of mechanisms. Simulations indicate that the drug resistance of a cancer cell can be quantified better by its dynamic response than by steady-state analysis.
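As a sketch of the kind of dynamic response the last sentence refers to, the toy model below integrates a single-cell mass balance with constant micropipet delivery, a passive first-order leak, and an active Michaelis-Menten efflux pump; all rate constants are invented for illustration and are not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import odeint

def intracellular_drug(c, t, j_in, k_leak, v_max, k_m):
    """Cytoplasmic drug level c: constant micropipet delivery j_in,
    passive first-order leak, and active Michaelis-Menten efflux."""
    return j_in - k_leak * c - v_max * c / (k_m + c)

t = np.linspace(0.0, 3000.0, 600)               # s
for v_max in (0.0, 0.05):                       # pump off vs. pump on
    c = odeint(intracellular_drug, 0.0, t, args=(0.02, 0.002, v_max, 1.0))
    print(f"v_max = {v_max}: late-time level = {c[-1, 0]:.2f} (a.u.)")
```

With the pump active, the simulated cell settles at a far lower drug level, the signature of resistance that the model proposes to read from the cell's dynamics rather than from a steady state alone.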
NASA Astrophysics Data System (ADS)
Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun
2017-11-01
Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and materials research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation based on realistic experimental parameters and signal extraction procedures. By directly comparing to the experiments as well as to other simulation efforts, our methods offer a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.
EMC3-EIRENE modelling of toroidally-localized divertor gas injection experiments on Alcator C-Mod
Lore, Jeremy D.; Reinke, M. L.; LaBombard, Brian; ...
2014-09-30
Experiments on Alcator C-Mod with toroidally and poloidally localized divertor nitrogen injection have been modeled using the three-dimensional edge transport code EMC3-EIRENE to elucidate the mechanisms driving measured toroidal asymmetries. In these experiments five toroidally distributed gas injectors in the private flux region were sequentially activated in separate discharges, resulting in clear evidence of toroidal asymmetries in radiated power and nitrogen line emission, as well as a ~50% toroidal modulation in electron pressure at the divertor target. The pressure modulation is qualitatively reproduced by the modelling, with the simulation yielding a toroidal asymmetry in the heat flow to the outer strike point. Finally, the toroidal variation in impurity line emission is qualitatively matched in the scrape-off layer above the strike point; however, kinetic corrections and cross-field drifts are likely required to quantitatively reproduce impurity behavior in the private flux region and the electron temperatures and densities directly in front of the target.
Statistical design of quantitative mass spectrometry-based proteomic experiments.
Oberg, Ann L; Vitek, Olga
2009-05-01
We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
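The value of blocking described above can be illustrated with a toy simulation: when each MS run (block) contains one sample from each group, run-to-run shifts cancel in within-block comparisons. In this two-treatment sketch a paired test plays the role of the blocked analysis; the effect sizes and noise levels are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_blocks = 8                                  # MS runs treated as blocks
block_effect = rng.normal(0, 2.0, n_blocks)   # systematic run-to-run shifts
true_diff = 1.0                               # true group difference (log2)

# Randomized complete block design: each run carries one sample per group,
# so the run effect cancels within blocks.
control = 10 + block_effect + rng.normal(0, 0.5, n_blocks)
treated = 10 + true_diff + block_effect + rng.normal(0, 0.5, n_blocks)

_, p_unpaired = stats.ttest_ind(treated, control)   # ignores blocking
_, p_paired = stats.ttest_rel(treated, control)     # respects blocking
print(f"ignoring blocks: p = {p_unpaired:.3f}; using blocks: p = {p_paired:.3f}")
```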
da Costa, Renata Souza; Bicca-Marques, Júlio César
2014-01-01
Foraging at night imposes different challenges from those faced during daylight, including the reliability of sensory cues. Owl monkeys (Aotus spp.) are ideal models among anthropoids to study the information used during foraging at low light levels because they are unique by having a nocturnal lifestyle. Six Aotus nigriceps and four A. infulatus individuals distributed into five enclosures were studied for testing their ability to rely on olfactory, visual, auditory, or spatial and quantitative information for locating food rewards and for evaluating the use of routes to navigate among five visually similar artificial feeding boxes mounted in each enclosure. During most experiments only a single box was baited with a food reward in each session. The baited box changed randomly throughout the experiment. In the spatial and quantitative information experiment there were two baited boxes varying in the amount of food provided. These baited boxes remained the same throughout the experiment. A total of 45 sessions (three sessions per night during 15 consecutive nights) per enclosure was conducted in each experiment. Only one female showed a performance suggestive of learning of the usefulness of sight to locate the food reward in the visual information experiment. Subjects showed a chance performance in the remaining experiments. All owl monkeys showed a preference for one box or a subset of boxes to inspect upon the beginning of each experimental session and consistently followed individual routes among feeding boxes. PMID:25517894
Hünniger, Kerstin; Lehnert, Teresa; Bieber, Kristin; Martin, Ronny; Figge, Marc Thilo; Kurzai, Oliver
2014-02-01
Candida albicans bloodstream infection is increasingly frequent and can result in disseminated candidiasis associated with high mortality rates. To analyze the innate immune response against C. albicans, fungal cells were added to human whole-blood samples. After inoculation, C. albicans started to filament and predominantly associate with neutrophils, whereas only a minority of fungal cells became attached to monocytes. While many parameters of host-pathogen interaction were accessible to direct experimental quantification in the whole-blood infection assay, others were not. To overcome these limitations, we generated a virtual infection model that allowed detailed and quantitative predictions on the dynamics of host-pathogen interaction. Experimental time-resolved data were simulated using a state-based modeling approach combined with the Monte Carlo method of simulated annealing to obtain quantitative predictions on a priori unknown transition rates and to identify the main axis of antifungal immunity. Results clearly demonstrated a predominant role of neutrophils, mediated by phagocytosis and intracellular killing as well as the release of antifungal effector molecules upon activation, resulting in extracellular fungicidal activity. Both mechanisms together account for almost [Formula: see text] of C. albicans killing, clearly proving that, besides being present in larger numbers than other leukocytes, neutrophils functionally dominate the immune response against C. albicans in human blood. A fraction of C. albicans cells escaped phagocytosis and remained extracellular and viable for up to four hours. This immune escape was independent of filamentation and fungal activity and not linked to exhaustion or inactivation of innate immune cells. The occurrence of C. albicans cells being resistant against phagocytosis may account for the high proportion of dissemination in C. albicans bloodstream infection. Taken together, iterative experiment-model-experiment cycles allowed quantitative analyses of the interplay between host and pathogen in a complex environment like human blood.
Electrical model of cold atmospheric plasma gun
NASA Astrophysics Data System (ADS)
Slutsker, Ya. Z.; Semenov, V. E.; Krasik, Ya. E.; Ryzhkov, M. A.; Felsteiner, J.; Binenbaum, Y.; Gil, Z.; Shtrichman, R.; Cohen, J. T.
2017-10-01
We present an analytical model of cold atmospheric plasma formed by a dielectric barrier discharge (DBD), which is based on the lumped and distributed elements of an equivalent electric circuit of this plasma. This model is applicable for a wide range of frequencies and amplitudes of the applied voltage pulses, no matter whether or not the generated plasma plume interacts with a target. The model allows quantitative estimation of the plasma plume length and the energy delivered to the plasma. Also, the results of this model can be used for the design of DBD guns which efficiently generate cold atmospheric plasma. A comparison of the results of the model with those obtained in experiments shows a fairly good agreement.
Optimization of Statistical Methods Impact on Quantitative Proteomics Data.
Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L
2015-10-02
As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data using both controlled experiments with known quantitative differences for specific proteins used as standards as well as "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and are straightforward in their application.
Improving fast-ion confinement in high-performance discharges by suppressing Alfvén eigenmodes
Kramer, Geritt J.; Podestà, Mario; Holcomb, Christopher; ...
2017-03-28
Here, we show that the degradation of fast-ion confinement in steady-state DIII-D discharges is quantitatively consistent with predictions based on the effects of multiple unstable Alfvén eigenmodes on beam-ion transport. Simulation and experiment show that increasing the radius where the magnetic safety factor has its minimum is effective in minimizing beam-ion transport. This is favorable for achieving high-performance steady-state operation in DIII-D and future reactors. A comparison between the experiments and a critical gradient model, in which only equilibrium profiles were used to predict the most unstable modes, shows that in a number of cases this model reproduces the measured neutron rate well.
Du, Hongying; Wang, Jie; Yao, Xiaojun; Hu, Zhide
2009-01-01
The heuristic method (HM) and support vector machine (SVM) were used to construct quantitative structure-retention relationship models from a series of compounds to predict the gradient retention times in reversed-phase high-performance liquid chromatography (HPLC) on three different columns. The aims of this investigation were to predict the retention times of multifarious compounds, to find the main properties of the three columns, and to shed light on the theory of the separation procedure. In our method, we correlated the retention times of many structurally diverse analytes on three columns (Symmetry C18, Chromolith, and SG-MIX) with their representative molecular descriptors, calculated from the molecular structures alone. HM was used to select the most important molecular descriptors and build linear regression models. Furthermore, non-linear regression models were built using the SVM method; the performance of the SVM models was better than that of the HM models, and the prediction results were in good agreement with the experimental values. This paper could give some insight into the factors likely to govern the gradient retention process on the three investigated HPLC columns, which could theoretically guide practical experiments.
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed for modeling the nutrient-response curve in animals and humans, justified by goodness of fit and/or by the underlying biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for describing nutrient-response phenomena is derived. The three parameters governing the curve (a) have biological interpretations, (b) may be used to calculate reliable estimates of nutrient-response relationships, and (c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from six experiments covering a wide range of nutrients and responses. An evaluation and comparison were also carried out on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple, and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271
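As a hedged illustration of fitting such a three-parameter function, the sketch below assumes a response built on the Rayleigh cumulative form, with a baseline, an asymptotic gain, and a scale parameter. This is one plausible reading of the model; both the parameterization and the dose-response data are hypothetical and do not reproduce the paper's equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_response(x, y0, a, c):
    """Response = baseline y0 plus asymptotic gain a scaled by the
    Rayleigh cumulative form with scale parameter c (assumed form)."""
    return y0 + a * (1.0 - np.exp(-x**2 / (2.0 * c**2)))

# Hypothetical dose-response data: nutrient level vs. performance.
dose = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
resp = np.array([20.1, 22.3, 27.8, 33.5, 36.9, 38.4, 38.9])

(y0, a, c), _ = curve_fit(rayleigh_response, dose, resp, p0=[20.0, 20.0, 0.5])
print(f"baseline = {y0:.1f}, gain = {a:.1f}, scale = {c:.2f}")
```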
NASA Astrophysics Data System (ADS)
Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the scalar PDFs in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
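A quick numerical check of the log-normal ingredient of this model: if strip stretching factors are log-normal and a strip's concentration decays roughly like the inverse of its stretching, the scalar levels come out log-normal as well. The scaling c ~ 1/rho used below is a deliberate simplification of the DSM's diffusion treatment, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Log-normal stretching factors rho, as for Batchelor-regime mixing.
mu, sigma = 2.0, 1.0
rho = rng.lognormal(mean=mu, sigma=sigma, size=200_000)

# Crude DSM-style scaling: a strip stretched by rho carries a scalar
# level reduced roughly by 1/rho once diffusion has acted.
c = 1.0 / rho

# log(c) = -log(rho), so the scalar levels are log-normal as well.
print(f"mean of log c = {np.log(c).mean():+.2f} (expected {-mu:+.2f})")
print(f"std  of log c = {np.log(c).std():.2f} (expected {sigma:.2f})")
```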
Continued Development and Validation of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2015-11-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks; determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and provide an intermediate step between theory and future experiments. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (~ 36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. Results from verification of the PSI-TET extended MHD model using the GEM magnetic reconnection challenge will also be presented along with investigation of injector configurations for future SIHI experiments using Taylor state equilibrium calculations. Work supported by DoE.
The TAME Project: Towards improvement-oriented software environments
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Rombach, H. Dieter
1988-01-01
Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented including a reassessment of its initial architecture.
Quantitative ptychographic reconstruction by applying a probe constraint
NASA Astrophysics Data System (ADS)
Reinhardt, J.; Schroer, C. G.
2018-04-01
The coherent scanning technique X-ray ptychography has become a routine tool for high-resolution imaging and nanoanalysis in various fields of research such as chemistry, biology or materials science. Often the ptychographic reconstruction results are analysed in order to yield absolute quantitative values for the object transmission and the illuminating probe function. In this work, we address a common ambiguity encountered in scaling the object transmission and probe intensity via the application of an additional constraint to the reconstruction algorithm. A ptychographic measurement of a model sample containing nanoparticles is used as a test data set against which to benchmark the reconstruction results obtained with different types of constraint. Achieving quantitative absolute values for the reconstructed object transmission is essential for advanced investigation of samples that change over time, e.g., during in-situ experiments, or, more generally, whenever different data sets are compared.
Bethge, Anja; Schumacher, Udo
2017-01-01
Background Tumor vasculature is critical for tumor growth, the formation of distant metastases and the efficiency of radio- and chemotherapy treatments. However, how the vasculature itself is affected during cancer treatment with regard to metastatic behavior has not been thoroughly investigated. Therefore, the aim of this study was to analyze the influence of hypofractionated radiotherapy and cisplatin chemotherapy on vessel tree geometry and metastasis formation in a small cell lung cancer xenograft mouse tumor model, to investigate the spread of malignant cells during different treatment modalities. Methods The biological data gained during these experiments were fed into our previously developed computer model "Cancer and Treatment Simulation Tool" (CaTSiT) to model the growth of the primary tumor and its metastatic deposits, and also the influence of different therapies. Furthermore, we performed quantitative histology analyses to verify our predictions in the xenograft mouse tumor model. Results According to the computer simulation, the number of cells engrafting must vary considerably to explain the different weights of the primary tumor at the end of the experiment. Once a primary tumor is established, the fractal dimension of its vasculature correlates with the tumor size. Furthermore, the fractal dimension of the tumor vasculature changes during treatment, indicating that the therapy affects the blood vessels' geometry. We corroborated these findings with a quantitative histological analysis showing that the blood vessel density is depleted during radiotherapy and cisplatin chemotherapy. The CaTSiT computer model reveals that chemotherapy influences the tumor's therapeutic susceptibility and its metastatic spreading behavior. Conclusion Using a systems biology approach in combination with xenograft models and computer simulations revealed that the use of chemotherapy and radiation therapy determines the spreading behavior by changing the blood vessel geometry of the primary tumor. PMID:29107953
Path analysis of the genetic integration of traits in the sand cricket: a novel use of BLUPs.
Roff, D A; Fairbairn, D J
2011-09-01
This study combines path analysis with quantitative genetics to analyse a key life history trade-off in the cricket, Gryllus firmus. We develop a path model connecting five traits associated with the trade-off between flight capability and reproduction and test this model using phenotypic data and estimates of breeding values (best linear unbiased predictors) from a half-sibling experiment. Strong support by both types of data validates our causal model and indicates concordance between the phenotypic and genetic expression of the trade-off. Comparisons of the trade-off between sexes and wing morphs reveal that these discrete phenotypes are not genetically independent and that the evolutionary trajectories of the two wing morphs are more tightly constrained to covary than those of the two sexes. Our results illustrate the benefits of combining a quantitative genetic analysis, which examines statistical correlations between traits, with a path model that focuses upon the causal components of variation. © 2011 The Authors. Journal of Evolutionary Biology © 2011 European Society For Evolutionary Biology.
Planner-Based Control of Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Kortenkamp, David; Fry, Chuck; Bell, Scott
2005-01-01
The paper describes an approach to the integration of qualitative and quantitative modeling techniques for advanced life support (ALS) systems. Developing reliable control strategies that scale up to fully integrated life support systems requires augmenting quantitative models and control algorithms with the abstractions provided by qualitative, symbolic models and their associated high-level control strategies. This allows for effective management of the combinatorics due to the integration of a large number of ALS subsystems. By focusing control actions at different levels of detail and reactivity, we can use faster, simpler responses at the lowest level and predictive but more complex responses at the higher levels of abstraction. In particular, methods from model-based planning and scheduling can provide effective resource management over long time periods. We describe a reference implementation of an advanced control system using the IDEA control architecture developed at NASA Ames Research Center. IDEA uses planning/scheduling as the sole reasoning method for predictive and reactive closed-loop control. We describe preliminary experiments in planner-based control of ALS carried out on an integrated ALS simulation developed at NASA Johnson Space Center.
Modeling Electronegative Impurity Concentrations in Liquid Argon Detectors
NASA Astrophysics Data System (ADS)
Tang, Wei; Li, Yichen; Thorn, Craig; Qian, Xin
2017-01-01
Achieving long electron lifetime is crucial to reaching the high performance of the large Liquid Argon Time Projection Chambers (LArTPC) envisioned for next-generation neutrino experiments. We have built a quantitative model to describe the impurity distribution and transport in a cryostat. Henry's constants of oxygen and water, which describe the partition of impurities between gaseous argon and liquid argon, have been deduced through this model with the measurements in the BNL 20-L LAr test stand. These results indicate the importance of the gas purification system; prospects for large LArTPC detectors will be discussed.
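A minimal mass-balance sketch of how a Henry-type partition constant controls where an impurity resides, assuming equilibrium between the gas ullage and the liquid and defining k = c_gas / c_liquid; the constants and the 10% ullage volume are hypothetical, not the BNL measurements.

```python
def liquid_fraction(k_henry, v_gas_over_v_liq):
    """Equilibrium fraction of a dilute impurity residing in the liquid,
    for a partition constant defined here as k = c_gas / c_liquid."""
    return 1.0 / (1.0 + k_henry * v_gas_over_v_liq)

# Hypothetical cryostat with 10% ullage and two contrasting impurities.
for name, k in [("volatile impurity (k = 100)", 100.0),
                ("soluble impurity (k = 0.01)", 0.01)]:
    frac = liquid_fraction(k, v_gas_over_v_liq=0.1)
    print(f"{name}: {100 * frac:.1f}% stays in the liquid")
```

An impurity that partitions strongly into the gas phase is exactly the kind a gas purification system can remove efficiently, which is one way to read the abstract's emphasis on gas purification.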
Robustness and flexibility in nematode vulva development.
Félix, Marie-Anne; Barkoulas, Michalis
2012-04-01
The Caenorhabditis elegans vulva has served as a paradigm for how conserved developmental pathways, such as EGF-Ras-MAPK, Notch and Wnt signaling, participate in networks driving animal organogenesis. Here, we discuss an emerging direction in the field, which places vulva research in a quantitative and microevolutionary framework. The final vulval cell fate pattern is known to be robust to change, but only recently has the variation of vulval traits been measured under stochastic, environmental or genetic variation. Whereas the resulting cell fate pattern is invariant among rhabditid nematodes, recent studies indicate that the developmental system has accumulated cryptic variation, even among wild C. elegans isolates. Quantitative differences in the signaling network have emerged through experiments and modeling as the driving force behind cryptic variation in Caenorhabditis species. On a wider evolutionary scale, the establishment of new model species has provided information about the presence of qualitative variation in vulval signaling pathways. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Cheng; Song, Pengfei; Meng, Fanchao; Li, Xiao; Liu, Xinyu; Song, Jun
2017-12-01
The present work presents a quantitative modeling framework for investigating the self-rolling of nanomembranes under different lattice mismatch strain anisotropy. The effect of transverse mismatch strain on the roll-up direction and curvature has been systematically studied employing both analytical modeling and numerical simulations. The bidirectional nature of the self-rolling of nanomembranes and the critical role of transverse strain in affecting the rolling behaviors have been demonstrated. Two fabrication strategies, i.e., third-layer deposition and corner geometry engineering, have been proposed to predictively manipulate the bidirectional rolling competition of strained nanomembranes, so as to achieve controlled, unidirectional roll-up. In particular for the strategy of corner engineering, microfabrication experiments have been performed to showcase its practical application and effectiveness. Our study offers new mechanistic knowledge towards understanding and predictive engineering of self-rolling of nanomembranes with improved roll-up yield.
Design and interpretation of cell trajectory assays
Bowden, Lucie G.; Simpson, Matthew J.; Baker, Ruth E.
2013-01-01
Cell trajectory data are often reported in the experimental cell biology literature to distinguish between different types of cell migration. Unfortunately, there is no accepted protocol for designing or interpreting such experiments and this makes it difficult to quantitatively compare different published datasets and to understand how changes in experimental design influence our ability to interpret different experiments. Here, we use an individual-based mathematical model to simulate the key features of a cell trajectory experiment. This shows that our ability to correctly interpret trajectory data is extremely sensitive to the geometry and timing of the experiment, the degree of motility bias and the number of experimental replicates. We show that cell trajectory experiments produce data that are most reliable when the experiment is performed in a quasi-one-dimensional geometry with a large number of identically prepared experiments conducted over a relatively short time-interval rather than a few trajectories recorded over particularly long time-intervals. PMID:23985736
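The sensitivity to replicate number discussed above can be reproduced with a toy individual-based simulation: biased 2-D random walks stand in for tracked cells, and the drift estimate from endpoint displacements tightens as the number of identically prepared trajectories grows. Step lengths, bias magnitude, and durations below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_trajectories(n_cells, n_steps=100, step=1.0, bias=0.1):
    """Biased 2-D random walks: unit steps in random directions plus a
    small drift of magnitude `bias` along +x, standing in for cells."""
    angles = rng.uniform(0.0, 2.0 * np.pi, (n_cells, n_steps))
    x = (step * np.cos(angles) + bias).cumsum(axis=1)
    y = (step * np.sin(angles)).cumsum(axis=1)
    return x, y

# The drift estimate tightens as the number of replicates grows.
for n_cells in (10, 100, 1000):
    x, _ = simulate_trajectories(n_cells)
    est = x[:, -1].mean() / x.shape[1]
    print(f"{n_cells:4d} trajectories: estimated bias = {est:+.3f} (true +0.100)")
```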
NASA Astrophysics Data System (ADS)
Wünnemann, Kai; Zhu, Meng-Hua; Stöffler, Dieter
2016-10-01
We investigated the ejection mechanics by a complementary approach of cratering experiments, including the microscopic analysis of material sampled from these experiments, and 2-D numerical modeling of vertical impacts. The study is based on cratering experiments in quartz sand targets performed at the NASA Ames Vertical Gun Range. In these experiments, the preimpact location in the target and the final position of ejecta were determined by using color-coded sand and a catcher system for the ejecta. The results were compared with numerical simulations of the cratering and ejection process to validate the iSALE shock physics code. In turn the models provide further details on the ejection velocities and angles. We quantify the general assumption that ejecta thickness decreases with distance according to a power law and that the relative proportion of shocked material in the ejecta increases with distance. We distinguish three types of shock metamorphic particles: (1) melt particles, (2) shock-lithified aggregates, and (3) shock-comminuted grains. The agreement between experiment and model was excellent, which provides confidence that the models can predict ejection angles, velocities, and the degree of shock loading of material expelled from a crater accurately if impact parameters such as impact velocity, impactor size, and gravity are varied beyond the experimental limitations. This study is relevant for a quantitative assessment of impact gardening on planetary surfaces and the evolution of regolith layers on atmosphereless bodies.
The SAGE Model of Social Psychological Research
Power, Séamus A.; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-01-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed. PMID:29361241
Small-amplitude acoustics in bulk granular media
NASA Astrophysics Data System (ADS)
Henann, David L.; Valenza, John J., II; Johnson, David L.; Kamrin, Ken
2013-10-01
We propose and validate a three-dimensional continuum modeling approach that predicts small-amplitude acoustic behavior of dense-packed granular media. The model is obtained through a joint experimental and finite-element study focused on the benchmark example of a vibrated container of grains. Using a three-parameter linear viscoelastic constitutive relation, our continuum model is shown to quantitatively predict the effective mass spectra in this geometry, even as geometric parameters for the environment are varied. Further, the model's predictions for the surface displacement field are validated mode-by-mode against experiment. A primary observation is the importance of the boundary condition between grains and the quasirigid walls.
NASA Astrophysics Data System (ADS)
Nguyen, Baochi; Upadhyaya, Arpita; van Oudenaarden, Alexander; Brenner, Michael
2002-11-01
It is well known that the Young's law and surface tension govern the shape of liquid droplets on solid surfaces. Here we address through experiments and theory the shape of growing aggregates of yeast on agar substrates, and assess whether these ideas still hold. Experiments are carried out on Baker's yeast, with different levels of expressions of an adhesive protein governing cell-cell and cell-substrate adhesion. Changing either the agar concentration or the expression of this protein modifies the local contact angle of a yeast droplet. When the colony is small, the shape is a spherical cap with the contact angle obeying Young's law. However, above a critical volume this structure is unstable, and the droplet becomes nonspherical. We present a theoretical model where this instability is caused by bulk elastic effects. The model predicts that the transition depends on both volume and contact angle, in a manner quantitatively consistent with our experiments.
A Monte Carlo software for the 1-dimensional simulation of IBIC experiments
NASA Astrophysics Data System (ADS)
Forneris, J.; Jakšić, M.; Pastuović, Ž.; Vittone, E.
2014-08-01
The ion beam induced charge (IBIC) microscopy is a valuable tool for the analysis of the electronic properties of semiconductors. In this work, a recently developed Monte Carlo approach for the simulation of IBIC experiments is presented along with a self-standing software package equipped with a graphical user interface. The method is based on the probabilistic interpretation of the excess charge carrier continuity equations, and it offers the end-user full control not only of the physical properties ruling the induced-charge formation mechanism (i.e., mobility, lifetime, electrostatics, device geometry), but also of the relevant experimental conditions (ionization profiles, beam dispersion, electronic noise) affecting the measurement of the IBIC pulses. Moreover, the software implements a novel model for the quantitative evaluation of radiation damage effects on the charge collection efficiency degradation of ion-beam-irradiated devices. The reliability of the model implementation is then validated against a benchmark IBIC experiment.
Semantic Coherence Facilitates Distributional Learning.
Ouyang, Long; Boroditsky, Lera; Frank, Michael C
2017-04-01
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of association with other words (e.g., they both tend to occur with words like "deliver," "truck," "package"). In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants to entirely novel words, whereas real language learners encounter input that contains some known words that are semantically organized. In three experiments, we show that (a) the presence of familiar semantic reference points facilitates distributional learning and (b) this effect crucially depends both on the presence of known words and the adherence of these known words to some semantic organization. Copyright © 2016 Cognitive Science Society, Inc.
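The computational claim in the first sentence can be made concrete in a few lines: words with quantitatively similar context-count vectors come out as similar under cosine similarity. The toy counts below, over four context words, are invented for illustration.

```python
import numpy as np

# Invented context counts over ("deliver", "truck", "package", "meow").
counts = {
    "postman": np.array([12.0, 8.0, 15.0, 0.0]),
    "mailman": np.array([10.0, 9.0, 13.0, 0.0]),
    "cat":     np.array([0.0, 1.0, 2.0, 14.0]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for a, b in [("postman", "mailman"), ("postman", "cat")]:
    print(f"sim({a}, {b}) = {cosine(counts[a], counts[b]):.2f}")
```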
Qualitative and Quantitative Distinctions in Personality Disorder
Wright, Aidan G. C.
2011-01-01
The “categorical-dimensional debate” has catalyzed a wealth of empirical advances in the study of personality pathology. However, this debate is merely one articulation of a broader conceptual question regarding whether to define and describe psychopathology as a quantitatively extreme expression of normal functioning or as qualitatively distinct in its process. In this paper I argue that dynamic models of personality (e.g., object-relations, cognitive-affective processing system) offer the conceptual scaffolding to reconcile these seemingly incompatible approaches to characterizing the relationship between normal and pathological personality. I propose that advances in personality assessment that sample behavior and experiences intensively provide the empirical techniques, whereas interpersonal theory offers an integrative theoretical framework, for accomplishing this goal. PMID:22804676
Improving Middle School Students’ Quantitative Literacy through Inquiry Lab and Group Investigation
NASA Astrophysics Data System (ADS)
Aisya, N. S. M.; Supriatno, B.; Saefudin; Anggraeni, S.
2017-02-01
The purpose of this study was to analyze the effect of applying metacognitive strategies based on Vee diagrams, through Inquiry Lab and Group Investigation, on students' quantitative literacy. This study compared two treatments in middle school learning activities. The metacognitive strategies were applied to the topic of environmental pollution in the 7th grade. This study used a quantitative approach with a quasi-experimental method. The research sample were 7th grade students: 27 students in the Inquiry Lab experimental group and 27 students in the Group Investigation experimental group. The instruments used in this research were a pretest and posttest of quantitative literacy skills, learning-step observation sheets, and questionnaires of teacher and student responses. As a result, the average N-gain from pretest to posttest increased in both experimental groups. The average posttest score was 61.11 for the Inquiry Lab class and 54.01 for the Group Investigation class. The average N-gain in quantitative literacy skill was 0.492 for the Inquiry Lab class and 0.426 for the Group Investigation class; both are in the medium category. The data were analyzed statistically using SPSS ver. 23, and the results showed that although both learning models can develop quantitative literacy, there is no significant difference in the improvement of students' quantitative literacy between Inquiry Lab and Group Investigation on the environmental pollution material.
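For reference, the N-gain reported here is Hake's normalized gain, computed as in the sketch below. The posttest value 61.11 is taken from the abstract; the pretest value is back-computed from the reported N-gain of 0.492 purely for illustration and is not reported in the study.

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: g = (post - pre) / (max_score - pre)."""
    return (post - pre) / (max_score - pre)

# 61.11 is the reported Inquiry Lab posttest mean; 23.45 is a pretest
# value back-computed from the reported N-gain of 0.492.
print(f"g = {normalized_gain(pre=23.45, post=61.11):.3f}")
```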
Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast
Pang, Wei; Coghill, George M.
2015-01-01
In this paper we demonstrate how Morven, a computational framework which can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model for the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The simulation results are presented together with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. PMID:25864377
Quantitative influence of risk factors on blood glucose level.
Chen, Songjing; Luo, Senlin; Pan, Limin; Zhang, Tiemei; Han, Longfei; Zhao, Haixiu
2014-01-01
The aim of this study is to quantitatively analyze the influence of risk factors on blood glucose level, and to provide a theoretical basis for understanding the characteristics of blood glucose change and confirming intervention indices for type 2 diabetes. A quantitative method is proposed to analyze the influence of risk factors on blood glucose using a back-propagation (BP) neural network. Ten risk factors are screened first. Then the cohort is divided into nine groups by gender and age. According to the minimum-error principle, nine BP models are trained respectively. The quantitative values of the influence of different risk factors on blood glucose change can be obtained by sensitivity calculation. The experimental results indicate that weight is the leading cause of blood glucose change (0.2449). The next factors are cholesterol, age, and triglyceride. The total ratio of these four factors reaches 77% of the nine screened risk factors. The sensitivity sequences can provide a judgment method for individual intervention. This method can be applied to the quantitative analysis of risk factors for other diseases and could potentially be used by clinical practitioners to identify high-risk populations for type 2 diabetes as well as other diseases.
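A minimal sketch of the sensitivity idea, assuming a small feed-forward network and central finite differences around the mean input. scikit-learn's MLPRegressor serves as a stand-in for the study's BP models, and the three-factor synthetic data (and their weights) are invented, not the clinical cohort.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Synthetic stand-in: 3 "risk factors" with different true weights on a
# "blood glucose" outcome (the study used 10 factors and clinical data).
X = rng.normal(size=(500, 3))
y = 0.9 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, 500)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X, y)

# Sensitivity of the prediction to each input, by central finite
# differences around the mean input vector.
x0, eps = X.mean(axis=0), 1e-3
sens = []
for j in range(X.shape[1]):
    hi, lo = x0.copy(), x0.copy()
    hi[j] += eps
    lo[j] -= eps
    sens.append(abs(net.predict([hi])[0] - net.predict([lo])[0]) / (2 * eps))
sens = np.array(sens)
print("relative sensitivities:", np.round(sens / sens.sum(), 2))
```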
NASA Astrophysics Data System (ADS)
Williams, G. T.; Kennedy, B. M.; Wilson, T. M.; Fitzgerald, R. H.; Tsunematsu, K.; Teissier, A.
2017-09-01
Recent casualties in volcanic eruptions due to trauma from blocks and bombs necessitate more rigorous, ballistic-specific risk assessment. Quantitative assessments are limited by a lack of experimental and field data on the vulnerability of buildings to ballistic hazards. An improved, quantitative understanding of building vulnerability to ballistic impacts is required for informing appropriate life-safety actions and other risk-reduction strategies. We assessed ballistic impacts to buildings from eruptions at Usu Volcano and Mt. Ontake in Japan and compiled available impact data from eruptions elsewhere to identify common damage patterns from ballistic impacts to buildings. We additionally completed a series of cannon experiments which simulate ballistic block impacts to building claddings to investigate their performance over a range of ballistic projectile velocities, masses and energies. Our experiments provide new insights by quantifying (1) the hazard associated with post-impact shrapnel from building and rock fragments; (2) the effect of impact obliquity on damage; and (3) the additional impact resistance buildings possess when claddings are struck in areas directly supported by framing components. This was not well identified in previous work, which may therefore have underestimated building vulnerability to ballistic hazards. To improve assessment of building vulnerability to ballistics, we use our experimental and field data to develop quantitative vulnerability models known as fragility functions. Our fragility functions and field studies show that although unreinforced buildings are highly vulnerable to large ballistics (> 20 cm diameter), they can still provide shelter, preventing death during eruptions.
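Fragility functions of this kind are commonly parameterized as lognormal CDFs of an intensity measure; the sketch below fits one to synthetic perforation data by maximum likelihood. The impact energies, damage outcomes, and the choice of impact energy as the intensity measure are all hypothetical stand-ins for the cannon-test data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Synthetic cannon-test records: impact energy (J) and whether the
# cladding was perforated (1) or not (0).
energy = rng.uniform(50.0, 2000.0, 200)
damaged = (rng.random(200) <
           norm.cdf(np.log(energy / 600.0) / 0.5)).astype(float)

def neg_log_lik(params):
    """Lognormal fragility: P(damage | E) = Phi(ln(E / median) / beta)."""
    median, beta = np.abs(params)            # keep parameters positive
    p = np.clip(norm.cdf(np.log(energy / median) / beta), 1e-9, 1 - 1e-9)
    return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[500.0, 0.4], method="Nelder-Mead")
median_hat, beta_hat = np.abs(fit.x)
print(f"fitted median = {median_hat:.0f} J, beta = {beta_hat:.2f}")
```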
ERIC Educational Resources Information Center
File, Carter L.
2013-01-01
This study was undertaken to understand whether a community or technical college chief business officer's career line influenced the lived experience of job satisfaction. This mixed method study was conducted in a two-phase approach using the Explanatory Design: Participant Selection Model variant. An initial quantitative survey was conducted from…
ERIC Educational Resources Information Center
Cheung, Alan C. K.; Yuen, Timothy W. W.
2016-01-01
The purpose of this paper was to examine the motives, the educational experiences, and the plan after graduation of a particular group of mainland Chinese students pursuing teacher education in Hong Kong by using a modified two-way push-and-pull model as our analytical framework. The study employed both quantitative and qualitative methods.…
ERIC Educational Resources Information Center
Rudolph, Michelle M.
2017-01-01
The education of student nurses is a complex endeavor involving the components of theory, skills, and clinical experiences during which the clinical instructor serves as a role model for socialization into the profession. Emotional intelligence, a skill that supports interpersonal relationships, enables the nursing clinical instructor to identify,…
NASA Technical Reports Server (NTRS)
Larson, David J.; Casagrande, Luis G.; DiMarzio, Don; Alexander, J. Iwan D.; Carlson, Fred; Lee, Taipo; Dudley, Michael; Raghathamachar, Balaji
1998-01-01
The Orbital Processing of High-Quality Doped and Alloyed CdTe Compound Semiconductors program was initiated to investigate, quantitatively, the influences of gravitationally dependent phenomena on the growth and quality of bulk compound semiconductors. The objective was to improve crystal quality (both structural and compositional) and to better understand and control the variables within the crystal growth production process. The empirical effort entailed the development of a terrestrial (one-g) experiment baseline for quantitative comparison with microgravity (mu-g) results. This effort was supported by the development of high-fidelity process models of heat transfer, fluid flow and solute redistribution, and thermo-mechanical stress occurring in the furnace, safety cartridge, ampoule, and crystal throughout the melting, seeding, crystal growth, and post-solidification processing. In addition, the sensitivity of the orbital experiments was analyzed with respect to the residual microgravity (mu-g) environment, both steady state and g-jitter. CdZnTe crystals were grown in one-g and in mu-g. Crystals processed terrestrially were grown at the NASA Ground Control Experiments Laboratory (GCEL) and at Grumman Aerospace Corporation (now Northrop Grumman Corporation). Two mu-g crystals were grown in the Crystal Growth Furnace (CGF) during the First United States Microgravity Laboratory Mission (USML-1), STS-50, June 24 - July 9, 1992.
Shrestha, Sourya; Foxman, Betsy; Dawid, Suzanne; Aiello, Allison E.; Davis, Brian M.; Berus, Joshua; Rohani, Pejman
2013-01-01
A significant fraction of seasonal and in particular pandemic influenza deaths are attributed to secondary bacterial infections. In animal models, influenza virus predisposes hosts to severe infection with both Streptococcus pneumoniae and Staphylococcus aureus. Despite its importance, the mechanistic nature of the interaction between influenza and pneumococci, its dependence on the timing and sequence of infections as well as the clinical and epidemiological consequences remain unclear. We explore an immune-mediated model of the viral–bacterial interaction that quantifies the timing and the intensity of the interaction. Taking advantage of the wealth of knowledge gained from animal models, and the quantitative understanding of the kinetics of pathogen-specific immunological dynamics, we formulate a mathematical model for immune-mediated interaction between influenza virus and S. pneumoniae in the lungs. We use the model to examine the pathogenic effect of inoculum size and timing of pneumococcal invasion relative to influenza infection, as well as the efficacy of antivirals in preventing severe pneumococcal disease. We find that our model is able to capture the key features of the interaction observed in animal experiments. The model predicts that introduction of pneumococcal bacteria during a 4–6 day window following influenza infection results in invasive pneumonia at significantly lower inoculum size than in hosts not infected with influenza. Furthermore, we find that antiviral treatment administered later than 4 days after influenza infection was not able to prevent invasive pneumococcal disease. This work provides a quantitative framework to study interactions between influenza and pneumococci and has the potential to accurately quantify the interactions. Such quantitative understanding can form a basis for effective clinical care, public health policies and pandemic preparedness. PMID:23825111
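A deliberately crude caricature of the timing effect described above: bacteria grow logistically and are cleared at a rate that dips during a window a few days after influenza infection. The functional form, window shape, and all rates are invented; the paper's immune-mediated model is far more detailed.

```python
import numpy as np
from scipy.integrate import odeint

def bacteria(y, t, r, K, k_clear, t_flu):
    """Bacterial load B: logistic growth minus clearance, where clearance
    is transiently suppressed in a window ~5 days after influenza."""
    B = y[0]
    window = np.exp(-0.5 * ((t - (t_flu + 5.0)) / 1.5) ** 2)
    clearance = k_clear * (1.0 - 0.9 * window)
    return [r * B * (1.0 - B / K) - clearance * B]

t = np.linspace(0.0, 14.0, 500)
for delay in (2.0, 5.0, 10.0):     # days from influenza to inoculation
    # Bacteria inoculated at t = 0; influenza occurred `delay` days before.
    B = odeint(bacteria, [1e2], t, args=(1.5, 1e8, 1.6, -delay))
    print(f"inoculation {delay:4.1f} d post-flu: peak load = {B.max():.1e}")
```

Even this caricature reproduces the qualitative window: inoculation a few days after influenza lets the bacterial load grow, whereas inoculation well outside the window decays.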
Daga, Pankaj R; Bolger, Michael B; Haworth, Ian S; Clark, Robert D; Martin, Eric J
2018-03-05
When medicinal chemists need to improve bioavailability (%F) within a chemical series during lead optimization, they synthesize new series members with systematically modified properties mainly by following experience and general rules of thumb. More quantitative models that predict %F of proposed compounds from chemical structure alone have proven elusive. Global empirical %F quantitative structure-property (QSPR) models perform poorly, and projects have too little data to train local %F QSPR models. Mechanistic oral absorption and physiologically based pharmacokinetic (PBPK) models simulate the dissolution, absorption, systemic distribution, and clearance of a drug in preclinical species and humans. Attempts to build global PBPK models based purely on calculated inputs have not achieved the <2-fold average error needed to guide lead optimization. In this work, local GastroPlus PBPK models are instead customized for individual medchem series. The key innovation was building a local QSPR for a numerically fitted effective intrinsic clearance (CL_loc). All inputs are subsequently computed from structure alone, so the models can be applied in advance of synthesis. Training CL_loc on the first 15-18 rat %F measurements gave adequate predictions, with clear improvements up to about 30 measurements, and incremental improvements beyond that.
Kinetic Model of Growth of Arthropoda Populations
NASA Astrophysics Data System (ADS)
Ershov, Yu. A.; Kuznetsov, M. A.
2018-05-01
Kinetic equations were derived for calculating the growth of crustacean populations (Crustacea) based on the biological growth model suggested earlier, using shrimp (Caridea) populations as an example. The development cycle of successive stages for populations can be represented in the form of quasi-chemical equations. The kinetic equations that describe the development cycle of crustaceans allow quantitative prediction of the development of populations depending on conditions. In contrast to extrapolation-simulation models, in the developed kinetic model of biological growth the kinetic parameters are the experimental characteristics of population growth. Verification and parametric identification of the developed model on the basis of the experimental data showed agreement with experiment within the error of the measurement technique.
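Quasi-chemical stage equations of this type reduce to a chain of first-order transitions; the sketch below integrates egg -> larva -> juvenile -> adult with a uniform mortality term. The stage names and rate constants are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np
from scipy.integrate import odeint

def stages(y, t, k1, k2, k3, m):
    """Chain of first-order stage transitions (quasi-chemical form)
    egg -> larva -> juvenile -> adult, with uniform mortality m."""
    egg, larva, juv, adult = y
    return [-(k1 + m) * egg,
            k1 * egg - (k2 + m) * larva,
            k2 * larva - (k3 + m) * juv,
            k3 * juv - m * adult]

t = np.linspace(0.0, 60.0, 300)                     # days
y = odeint(stages, [1000.0, 0.0, 0.0, 0.0], t,
           args=(0.3, 0.15, 0.1, 0.02))             # illustrative rates
print(f"adults at day 60: {y[-1, 3]:.0f}")
```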
CDMBE: A Case Description Model Based on Evidence
Zhu, Jianlin; Yang, Xiaoping; Zhou, Jing
2015-01-01
By combining the advantages of argument maps and Bayesian networks, a case description model based on evidence (CDMBE), suited to the continental law system, is proposed to describe criminal cases. The logic of the model adopts credibility-based logical reasoning and quantifies evidence-based inference. In order to be consistent with practical inference rules, five types of relationships and a set of rules are defined to calculate the credibility of assumptions from the credibility and supportability of the related evidence. Experiments show that the model can translate users' ideas into a figure and that the results calculated from CDMBE are in line with those from a Bayesian model. PMID:26421006
A collaborative molecular modeling environment using a virtual tunneling service.
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative research on three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problem caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider the integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments.
Computational modeling of electrostatic charge and fields produced by hypervelocity impact
Crawford, David A.
2015-05-19
Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments, at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.
Ma, Baoshun; Ruwet, Vincent; Corieri, Patricia; Theunissen, Raf; Riethmuller, Michel; Darquenne, Chantal
2009-01-01
Accurate modeling of air flow and aerosol transport in the alveolated airways is essential for quantitative predictions of pulmonary aerosol deposition. However, experimental validation of such modeling studies has been scarce. The objective of this study is to validate CFD predictions of flow field and particle trajectory with experiments within a scaled-up model of alveolated airways. Steady flow (Re = 0.13) of silicone oil was captured by particle image velocimetry (PIV), and the trajectories of 0.5 mm and 1.2 mm spherical iron beads (representing 0.7 to 14.6 μm aerosol in vivo) were obtained by particle tracking velocimetry (PTV). At twelve selected cross sections, the velocity profiles obtained by CFD matched well with those by PIV (within 1.7% on average). The CFD predicted trajectories also matched well with PTV experiments. These results showed that air flow and aerosol transport in models of human alveolated airways can be simulated by CFD techniques with reasonable accuracy. PMID:20161301
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinckley, Daniel M.; Freeman, Gordon S.; Whitmer, Jonathan K.
2013-10-14
A new 3-Site-Per-Nucleotide coarse-grained model for DNA is presented. The model includes anisotropic potentials between bases involved in base stacking and base pair interactions that enable the description of relevant structural properties, including the major and minor grooves. In an improvement over available coarse-grained models, the correct persistence length is recovered for both ssDNA and dsDNA, allowing for simulation of non-canonical structures such as hairpins. DNA melting temperatures, measured for duplexes and hairpins by integrating over free energy surfaces generated using metadynamics simulations, are shown to be in quantitative agreement with experiment for a variety of sequences and conditions. Hybridization rate constants, calculated using forward-flux sampling, are also shown to be in good agreement with experiment. The coarse-grained model presented here is suitable for use in biological and engineering applications, including nucleosome positioning and DNA-templated engineering.
Gini, Giuseppina
2016-01-01
In this chapter, we introduce the basics of computational chemistry and discuss how computational methods have been extended to some biological properties and, in particular, to toxicology. For about 20 years, chemical experimentation has increasingly been replaced by modeling and virtual experimentation, drawing on a large core of mathematics, chemistry, physics, and algorithms. We then see how animal experiments, aimed at providing a standardized result about a biological property, can be mimicked by new in silico methods. Our emphasis here is on toxicology and on predicting properties from chemical structures. Two main streams of such models are available: models that consider the whole molecular structure to predict a value, namely QSAR (Quantitative Structure Activity Relationships), and models that find relevant substructures to predict a class, namely SAR. The term in silico discovery is applied to chemical design, to computational toxicology, and to drug discovery. We discuss how experimental practice in the biological sciences is moving more and more toward modeling and simulation. Such virtual experiments confirm hypotheses, provide data for regulation, and help in designing new chemicals.
Non-Coalescence Effects in Microgravity
NASA Technical Reports Server (NTRS)
Neitzel, G. Paul
1998-01-01
Non-coalescence of two bodies of the same liquid and the suppression of contact between liquid drops and solid surfaces are being studied through a pair of parallel investigations conducted at the Georgia Institute of Technology and the Microgravity Research and Support (MARS) Center in Naples, Italy. Both non-coalescence and contact suppression are achieved by exploiting the mechanism of thermocapillary convection to drive a lubricating film of surrounding gas (air) into the space between the two liquid free surfaces (non-coalescence) or between the drop free surface and the solid (contact suppression). Earlier experiments included flow visualization in both axisymmetric and (nearly) two-dimensional geometries and quantitative measurements of film thickness in the contact-suppression case in both geometries. Work in the second year has focused on obtaining quantitative results on the effects of variable air pressure, on developing analytical and numerical models of non-coalescing droplets, and on pursuing potential applications of these self-lubricated systems.
Teaching optical phenomena with Tracker
NASA Astrophysics Data System (ADS)
Rodrigues, M.; Simeão Carvalho, P.
2014-11-01
Since the invention and dissemination of domestic laser pointers, observing optical phenomena has been a relatively easy task. Any student can buy a laser and observe at home, in a qualitative way, the reflection, refraction and even diffraction of light. However, quantitative experiments require high-precision instruments and a relatively complex setup. Fortunately, it is now possible to analyse optical phenomena in a simple and quantitative way using the freeware video analysis software 'Tracker'. In this paper, we show the advantages of video-based experimental activities for teaching concepts in optics. We intend to show: (a) how easy the study of such phenomena can be, even at home, because only simple materials are needed, and Tracker provides the necessary measuring instruments; and (b) how we can use Tracker to improve students' understanding of some optical concepts. We give examples using video modelling to study the laws of reflection, Snell's laws, focal distances in lenses and mirrors, and diffraction phenomena, which we hope will motivate teachers to implement it in their own classes and schools.
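As a flavor of the kind of quantitative analysis described, the short sketch below checks Snell's law from incidence/refraction angle pairs such as a student could read off a Tracker video; the angle values and the use of NumPy are illustrative, not taken from the paper.

```python
import numpy as np

# Made-up angle pairs, as could be measured with Tracker's angle tool
# from a video of a beam entering a glass block.
theta_i = np.radians([10.0, 20.0, 30.0, 40.0, 50.0])   # incidence
theta_r = np.radians([6.6, 13.2, 19.5, 25.4, 30.7])    # refraction

# Snell's law: n1*sin(theta_i) = n2*sin(theta_r). With n1 = 1 (air),
# the slope of sin(theta_i) versus sin(theta_r) estimates n2.
slope, intercept = np.polyfit(np.sin(theta_r), np.sin(theta_i), 1)
print(f"estimated refractive index: {slope:.2f}")       # ~1.5 for glass
```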
Qualitative, semi-quantitative, and quantitative simulation of the osmoregulation system in yeast.
Pang, Wei; Coghill, George M
2015-05-01
In this paper we demonstrate how Morven, a computational framework that can perform qualitative, semi-quantitative, and quantitative simulation of dynamical systems using the same model formalism, is applied to study the osmotic stress response pathway in yeast. First, the Morven framework itself is briefly introduced in terms of the model formalism employed and the output format. We then built a qualitative model of the biophysical process of osmoregulation in yeast, and a global qualitative-level picture was obtained through qualitative simulation of this model. Furthermore, we constructed a Morven model based on an existing quantitative model of the osmoregulation system. This model was then simulated qualitatively, semi-quantitatively, and quantitatively. The simulation results are presented together with an analysis. Finally, the future development of the Morven framework for modelling dynamic biological systems is discussed. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Influence of rumen protozoa on methane emission in ruminants: a meta-analysis approach.
Guyader, J; Eugène, M; Nozière, P; Morgavi, D P; Doreau, M; Martin, C
2014-11-01
A meta-analysis was conducted to evaluate the effects of protozoa concentration on methane emission from ruminants. A database was built from 59 publications reporting data from 76 in vivo experiments. The experiments included in the database recorded methane production and rumen protozoa concentration measured on the same groups of animals. Quantitative data such as diet chemical composition, rumen fermentation and microbial parameters, and qualitative information such as methane mitigation strategies were also collected. In the database, 31% of the experiments reported a concomitant reduction of both protozoa concentration and methane emission (g/kg dry matter intake). Nearly all of these experiments tested lipids as methane mitigation strategies. By contrast, 21% of the experiments reported a variation in methane emission without changes in protozoa numbers, indicating that methanogenesis is also regulated by other mechanisms not involving protozoa. Experiments that used chemical compounds as an antimethanogenic treatment belonged to this group. The relationship between methane emission and protozoa concentration was studied with a variance-covariance model, with experiment as a fixed effect. The experiments included in the analysis had a within-experiment variation of protozoa concentration higher than 5.3 log10 cells/ml corresponding to the average s.e.m. of the database for this variable. To detect potential interfering factors for the relationship, the influence of several qualitative and quantitative secondary factors was tested. This meta-analysis showed a significant linear relationship between methane emission and protozoa concentration: methane (g/kg dry matter intake)=-30.7+8.14×protozoa (log10 cells/ml) with 28 experiments (91 treatments), residual mean square error=1.94 and adjusted R²=0.90. The proportion of butyrate in the rumen positively influenced the least square means of this relationship.
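The reported mixed-model regression is simple enough to use directly; the helper below evaluates it for given protozoa concentrations (Python, with illustrative inputs). Note that the equation describes within-experiment variation, so predictions are indicative only.

```python
import numpy as np

def methane_g_per_kg_dmi(protozoa_log10_cells_per_ml):
    """Meta-analysis relationship reported in the abstract:
    CH4 (g/kg DMI) = -30.7 + 8.14 * protozoa (log10 cells/ml)."""
    return -30.7 + 8.14 * np.asarray(protozoa_log10_cells_per_ml)

print(methane_g_per_kg_dmi([5.0, 5.5, 6.0]))   # -> [10.0, 14.07, 18.14]
```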
A permeation theory for single-file ion channels: one- and two-step models.
Nelson, Peter Hugo
2011-04-28
How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no quantitative comparison has yet been made. The A/D model makes a network of predictions for how the elementary steps and the channel occupancy vary with both concentration and voltage. In addition, the proposed theoretical framework suggests a new way of plotting the energetics of the simulated system using a one-dimensional permeation coordinate that uses electric potential energy as a metric for the net fractional progress through the permeation mechanism. This approach has the potential to provide a quantitative connection between atomistic simulations and permeation experiments for the first time.
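The abstract gives the A/D model's parameter count but not its equations, so the sketch below is only a generic two-step association/dissociation cycle with an Eyring-type split of the membrane voltage across the steps; the functional form, rate constants, and parameter values are assumptions for illustration, not the published model.

```python
import numpy as np

KB = 1.380649e-23     # Boltzmann constant, J/K
E0 = 1.602176634e-19  # elementary charge, C

def two_step_current(V, c, k_a=1e8, k_d=1e7, delta=0.5, T=298.0):
    """Steady-state cycle current of a generic two-step permeation model:
    voltage-assisted association (rate proportional to concentration)
    followed by dissociation, with fraction delta of the potential
    dropped over the first step. One charge crosses per completed cycle,
    so the flux is the harmonic combination k1*k2/(k1 + k2)."""
    kT = KB * T
    assoc = k_a * c * np.exp(delta * E0 * V / (2.0 * kT))
    dissoc = k_d * np.exp((1.0 - delta) * E0 * V / (2.0 * kT))
    return E0 * assoc * dissoc / (assoc + dissoc)  # A per channel

print(two_step_current(V=0.05, c=0.1))  # ~1e-12 A at +50 mV, 100 mM
```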
Adaptation of Timing Behavior to a Regular Change in Criterion
Sanabria, Federico; Oldenburg, Liliana
2013-01-01
This study examined how operant behavior adapted to an abrupt but regular change in the timing of reinforcement. Pigeons were trained on a fixed interval (FI) 15-s schedule of reinforcement during half of each experimental session, and on an FI 45-s (Experiment 1), FI 60-s (Experiment 2), or extinction schedule (Experiment 3) during the other half. FI performance was well characterized by a mixture of two gamma-shaped distributions of responses. When a longer FI schedule was in effect in the first half of the session (Experiment 1), a constant interference by the shorter FI was observed. When a shorter FI schedule was in effect in the first half of the session (Experiments 1, 2, and 3), the transition between schedules involved a decline in responding and a progressive rightward shift in the mode of the response distribution initially centered around the short FI. These findings are discussed in terms of the constraints they impose on quantitative models of timing, and in relation to the implications for information-based models of associative learning. PMID:23962672
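To make the "mixture of two gamma-shaped distributions" concrete, the sketch below evaluates such a mixture density over a trial with a short-FI and a long-FI component; all parameter values are made up for illustration and are not the fitted values from the study.

```python
import numpy as np
from scipy import stats

t = np.linspace(0.0, 60.0, 601)   # time since trial onset (s)
w = 0.4                            # weight of the short-FI component

# Gamma components with modes near 15 s and 45 s (mode = (a-1)*scale).
mix_pdf = (w * stats.gamma.pdf(t, a=10, scale=5/3)
           + (1 - w) * stats.gamma.pdf(t, a=10, scale=5.0))

print(t[np.argmax(mix_pdf)])   # ≈ 15 s: the short-FI mode dominates here
```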
Schilling, Birgit; Rardin, Matthew J.; MacLean, Brendan X.; Zawadzka, Anna M.; Frewen, Barbara E.; Cusack, Michael P.; Sorensen, Dylan J.; Bereman, Michael S.; Jing, Enxuan; Wu, Christine C.; Verdin, Eric; Kahn, C. Ronald; MacCoss, Michael J.; Gibson, Bradford W.
2012-01-01
Despite advances in metabolic and postmetabolic labeling methods for quantitative proteomics, there remains a need for improved label-free approaches. This need is particularly pressing for workflows that incorporate affinity enrichment at the peptide level, where isobaric chemical labels such as isobaric tags for relative and absolute quantitation and tandem mass tags may prove problematic or where stable isotope labeling with amino acids in cell culture labeling cannot be readily applied. Skyline is a freely available, open source software tool for quantitative data processing and proteomic analysis. We expanded the capabilities of Skyline to process ion intensity chromatograms of peptide analytes from full scan mass spectral data (MS1) acquired during HPLC MS/MS proteomic experiments. Moreover, unlike existing programs, Skyline MS1 filtering can be used with mass spectrometers from four major vendors, which allows results to be compared directly across laboratories. The new quantitative and graphical tools now available in Skyline specifically support interrogation of multiple acquisitions for MS1 filtering, including visual inspection of peak picking and both automated and manual integration, key features often lacking in existing software. In addition, Skyline MS1 filtering displays retention time indicators from underlying MS/MS data contained within the spectral library to ensure proper peak selection. The modular structure of Skyline also provides well defined, customizable data reports and thus allows users to directly connect to existing statistical programs for post hoc data analysis. To demonstrate the utility of the MS1 filtering approach, we have carried out experiments on several MS platforms and have specifically examined the performance of this method to quantify two important post-translational modifications: acetylation and phosphorylation, in peptide-centric affinity workflows of increasing complexity using mouse and human models. PMID:22454539
Manson, Steven M.; Evans, Tom
2007-01-01
We combine mixed-methods research with integrated agent-based modeling to understand land change and economic decision making in the United States and Mexico. This work demonstrates how sustainability science benefits from combining integrated agent-based modeling (which blends methods from the social, ecological, and information sciences) and mixed-methods research (which interleaves multiple approaches ranging from qualitative field research to quantitative laboratory experiments and interpretation of remotely sensed imagery). We test assumptions of utility-maximizing behavior in household-level landscape management in south-central Indiana, linking parcel data, land cover derived from aerial photography, and findings from laboratory experiments. We examine the role of uncertainty and limited information, preferences, differential demographic attributes, and past experience and future time horizons. We also use evolutionary programming to represent bounded rationality in agriculturalist households in the southern Yucatán of Mexico. This approach captures realistic rule-of-thumb strategies while identifying social and environmental factors in a manner similar to econometric models. These case studies highlight the role of computational models of decision making in land-change contexts and advance our understanding of decision making in general. PMID:18093928
Triggering up states in all-to-all coupled neurons
NASA Astrophysics Data System (ADS)
Ngo, H.-V. V.; Köhler, J.; Mayer, J.; Claussen, J. C.; Schuster, H. G.
2010-03-01
Slow-wave sleep in mammals is characterized by a change of large-scale cortical activity currently paraphrased as cortical Up/Down states. A recent experiment demonstrated a bistable collective behaviour in ferret slices, with the remarkable property that the Up states can be switched on and off with pulses, or excitations, of the same polarity, whereby the effect of the second pulse depends significantly on the time interval between the pulses. Here we present a simple time-discrete model of a neural network that exhibits this type of behaviour and quantitatively reproduces the time dependence found in the experiments.
Mechanism of hologram formation in fixation-free rehalogenating bleaching processes.
Neipp, Cristian; Pascual, Inmaculada; Beléndez, Augusto
2002-07-10
The mechanism of hologram formation in fixation-free rehalogenating bleaching processes has been treated by different authors. The experiments carried out on Agfa 8E75 HD plates led to the conclusion that material transfer from the exposed to the unexposed zones is the main mechanism underlying the process. We present a simple model that explains the mechanism of hologram formation inside the emulsion. Quantitative data obtained using both Agfa 8E75 HD and Slavich PFG-01 fine-grained red-sensitive emulsions are also given, and good agreement between theory and experiment is found.
Meinherz, Franziska; Videira, Nuno
2018-04-10
The aim of this paper is to contribute to the exploration of environmental modeling methods based on the elicitation of stakeholders' mental models. This aim is motivated by the necessity of understanding the dilemmas and behavioral rationales of individuals in order to support the management of environmental problems. The methodology developed for this paper integrates qualitative and quantitative methods by deploying focus groups for the elicitation of the behavioral rationales of the target population, and grounded theory to code the information gained in the focus groups and to guide the development of a dynamic simulation model. The approach is applied to a case of urban air pollution caused by residential heating with wood in central Chile. The results show how the households' behavior interrelates with the governmental management strategies and provide valuable and novel insights into potential challenges to the implementation of policies to manage the local air pollution problem. The experience further shows that the developed participatory modeling approach makes it possible to overcome some of the issues currently encountered in the elicitation of individuals' behavioral rationales and in the quantification of qualitative information.
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. This approach can also raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model, which includes the usual symmetric normal distribution as a special case, allows continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of the parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
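A minimal sketch of the model's core ingredient: a two-component skew-normal mixture likelihood of the kind maximized by EM in skew-normal IM. The backcross-style mixing proportion, the shared scale and shape, and all values below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def mixture_loglik(y, p, loc0, loc1, scale, shape):
    """Log-likelihood of a two-component skew-normal mixture, with p the
    genotype probability at the putative QTL (components share scale and
    shape for simplicity)."""
    f0 = stats.skewnorm.pdf(y, shape, loc=loc0, scale=scale)
    f1 = stats.skewnorm.pdf(y, shape, loc=loc1, scale=scale)
    return np.sum(np.log((1 - p) * f0 + p * f1))

# Synthetic skewed phenotypes, then one likelihood evaluation.
y = stats.skewnorm.rvs(4, loc=1.0, scale=2.0, size=200, random_state=0)
print(mixture_loglik(y, p=0.5, loc0=0.0, loc1=1.0, scale=2.0, shape=4))
```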
Ocean regional circulation model sensitivity to the resolution of lateral boundary conditions
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan
2017-04-01
Dynamical downscaling with nested regional oceanographic models is an effective approach for operational coastal weather forecasting and for projecting long-term climate on the ocean. However, nesting procedures introduce unwanted errors into dynamical downscaling because of differences in numerical grid sizes and updating steps. Such unavoidable errors restrict the application of Ocean Regional Circulation Models (ORCMs) in both short-term forecasts and long-term projections. The current work identifies the effects of errors induced by computational limitations during nesting procedures on the downscaled results of the ORCMs. The errors are quantitatively evaluated, for each error source and its characteristics, by the Big-Brother Experiment (BBE). The BBE separates the identified errors from each other and quantitatively assesses the amount of uncertainty by employing the same model for both the nesting and the nested simulations. Here, we focus on errors arising from two main aspects of nesting procedures: the differences in spatial grids and the temporal updating steps. After running the diverse BBE cases separately, a Taylor diagram was adopted to analyze the results and suggest an optimum in terms of grid size, updating period, and domain size. Key words: lateral boundary condition, error, ocean regional circulation model, Big-Brother Experiment. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Development of integrated estuarine management system" and a National Research Foundation of Korea (NRF) Grant (No. 2015R1A5A 7037372) funded by MSIP of Korea. The authors thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.
NASA Astrophysics Data System (ADS)
Zhang, Zesheng; Zhang, Lili; Jasa, John; Li, Wenlong; Gazonas, George; Negahban, Mehrdad
2017-07-01
A representative all-atom molecular dynamics (MD) system of polycarbonate (PC) is built and conditioned to capture and predict the behaviours of PC in response to a broad range of thermo-mechanical loadings for various degrees of thermal aging. The PC system is constructed to have a distribution of molecular weights comparable to a widely used commercial PC (LEXAN 9034), and thermally conditioned to produce models for aged and unaged PC. The MD responses of these models are evaluated through comparisons to existing experimental results carried out at much lower loading rates, but over a broad range of temperatures and loading modes. These experiments include monotonic extension/compression/shear, unilaterally and bilaterally confined compression, and load reversal during shear. The MD simulations show both qualitative and quantitative similarity with the experimental response. The quantitative similarity is evaluated by comparing the dilatational response under bilaterally confined compression, the shear flow viscosity, and the equivalent yield stress. The consistency of the in silico response with real laboratory experiments strongly suggests that the current PC models are physically and mechanically relevant and can potentially be used to investigate the thermo-mechanical response under loading conditions that would not otherwise be easily accessible. These MD models may provide valuable insight into the molecular sources of certain observations, and could offer new perspectives on how to develop constitutive models based on a better understanding of the response of PC under complex loadings. To this latter end, the models are used to predict the response of PC to complex loading modes that would normally be difficult to apply, or that include characteristics that would be difficult to measure. These include the responses of unaged and aged PC to unilaterally confined extension/compression, cyclic uniaxial/shear loadings, and saw-tooth extension/compression/shear.
Wilkinson, Irene J; Pisaniello, Dino; Ahmad, Junaid; Edwards, Suzanne
2010-09-01
To present the evaluation of a large-scale quantitative respirator-fit testing program. Concurrent questionnaire survey of fit testers and test subjects. Ambulatory care, home nursing care, and acute care hospitals across South Australia. Quantitative facial-fit testing was performed with TSI PortaCount instruments for healthcare workers (HCWs) who wore 5 different models of a disposable P2 (N95-equivalent) respirator. The questionnaire included questions about the HCW's age, sex, race, occupational category, main area of work, smoking status, facial characteristics, prior training and experience in use of respiratory masks, and number of attempts to obtain a respirator fit. A total of 6,160 HCWs were successfully fitted during the period from January through July 2007. Of the 4,472 HCWs who responded to the questionnaire and were successfully fitted, 3,707 (82.9%) were successfully fitted with the first tested respirator, 551 (12.3%) required testing with a second model, and 214 (4.8%) required 3 or more tests. We noted an increased pass rate on the first attempt over time. Asians (excluding those from South and Central Asia) had the highest failure rate (16.3% [45 of 276 Asian HCWs were unsuccessfully fitted]), and whites had the lowest (9.8% [426 of 4,338 white HCWs]). Race was highly correlated with facial shape. Among occupational groups, doctors had the highest failure rate (13.4% [81 of 604 doctors]), but they also had the highest proportion of Asians. Prior education and/or training in respirator use were not associated with a higher pass rate. Certain facial characteristics were associated with higher or lower pass rates with regard to fit testing, and fit testers were able to select a suitable respirator on the basis of a visual assessment in the majority of cases. For the fit tester, training and experience were important factors; however, for the HCW being fitted, prior experience in respirator use was not an important factor.
Characteristics of the Nordic Seas overflows in a set of Norwegian Earth System Model experiments
NASA Astrophysics Data System (ADS)
Guo, Chuncheng; Ilicak, Mehmet; Bentsen, Mats; Fer, Ilker
2016-08-01
Global ocean models with an isopycnic vertical coordinate are advantageous in representing overflows, as they do not suffer from topography-induced spurious numerical mixing commonly seen in geopotential coordinate models. In this paper, we present a quantitative diagnosis of the Nordic Seas overflows in four configurations of the Norwegian Earth System Model (NorESM) family that features an isopycnic ocean model. For intercomparison, two coupled ocean-sea ice and two fully coupled (atmosphere-land-ocean-sea ice) experiments are considered. Each pair consists of a (non-eddying) 1° and a (eddy-permitting) 1/4° horizontal resolution ocean model. In all experiments, overflow waters remain dense and descend to the deep basins, entraining ambient water en route. Results from the 1/4° pair show similar behavior in the overflows, whereas the 1° pair show distinct differences, including temperature/salinity properties, volume transport (Q), and large scale features such as the strength of the Atlantic Meridional Overturning Circulation (AMOC). The volume transport of the overflows and degree of entrainment are underestimated in the 1° experiments, whereas in the 1/4° experiments, there is a two-fold downstream increase in Q, which matches observations well. In contrast to the 1/4° experiments, the coarse 1° experiments do not capture the inclined isopycnals of the overflows or the western boundary current off the Flemish Cap. In all experiments, the pathway of the Iceland-Scotland Overflow Water is misrepresented: a major fraction of the overflow proceeds southward into the West European Basin, instead of turning westward into the Irminger Sea. This discrepancy is attributed to excessive production of Labrador Sea Water in the model. The mean state and variability of the Nordic Seas overflows have significant consequences on the response of the AMOC, hence their correct representations are of vital importance in global ocean and climate modelling.
Homogeneous and heterogeneous chemistry along air parcel trajectories
NASA Technical Reports Server (NTRS)
Jones, R. L.; Mckenna, D. L.; Poole, L. R.; Solomon, S.
1990-01-01
The study of coupled heterogeneous and homogeneous chemistry due to polar stratospheric clouds (PSC's) using Lagrangian parcel trajectories for interpretation of the Airborne Arctic Stratosphere Experiment (AASE) is discussed. This approach represents an attempt to quantitatively model the physical and chemical perturbation to stratospheric composition due to formation of PSC's using the fullest possible representation of the relevant processes. Further, the meteorological fields from the United Kingdom Meteorological Office global model were used to deduce potential vorticity and to infer regions of PSC's as an input to flight planning during AASE.
Old and new results about single-photon sensitivity in human vision
NASA Astrophysics Data System (ADS)
Nelson, Philip C.
2016-04-01
It is sometimes said that ‘our eyes can see single photons’. This article begins by finding a more precise version of that claim and reviewing evidence gathered for it up to around 1985 in two distinct realms, those of human psychophysics and single-cell physiology. Finding a single framework that accommodates both kinds of result is then a nontrivial challenge, and one that sets severe quantitative constraints on any model of dim-light visual processing. This article presents one such model and compares it to a recent experiment.
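The article's specific model is not reproduced in the abstract; for orientation, the sketch below computes the classic Poisson "probability of seeing" that frames most quantitative treatments of dim-light detection (a flash is seen if at least K photons are absorbed). The threshold and quantum-efficiency values are illustrative assumptions.

```python
from scipy import stats

def prob_seeing(mean_photons_at_cornea, k_threshold=6, efficiency=0.06):
    """P(see) = P(absorbed count >= K), where the absorbed count is
    Poisson with mean = efficiency * mean photons at the cornea.
    Parameter values are illustrative, not the article's estimates."""
    mu = efficiency * mean_photons_at_cornea
    return stats.poisson.sf(k_threshold - 1, mu)

# Frequency-of-seeing curve at a few flash intensities.
for n in (50, 100, 200, 400):
    print(n, round(prob_seeing(n), 3))
```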
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis that is robust against component information loss. Raman spectral signals with low correlations to the analyte concentrations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built from the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment on analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was used to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and the existing methods. The results indicated that the proposed method is effective at increasing the robustness of the traditional CLS model against component information loss, and that its predictive power is comparable to that of PLS or PCR.
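The augmentation idea can be sketched compactly: estimate pure-component spectra by classical least squares, append a residual direction standing in for the missing component information, and regress new spectra onto the augmented basis. The synthetic data, the SVD-based augmentation, and the dimensions below are illustrative assumptions, not the paper's exact procedure (which selects low-correlation spectral signals via an RMSECV curve).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set obeying the CLS model S = C @ K: rows of C
# are sample concentrations, rows of S are measured spectra.
C = rng.uniform(0.1, 1.0, size=(20, 2))        # 20 samples, 2 analytes
K_true = rng.normal(size=(2, 100))             # pure-component spectra
S = C @ K_true + 0.01 * rng.normal(size=(20, 100))

# Classical least squares: estimate the pure-component spectra.
K_hat, *_ = np.linalg.lstsq(C, S, rcond=None)

# Augmentation (schematic): append the leading residual direction so
# variation from unmodeled components is absorbed rather than ignored.
resid = S - C @ K_hat
_, _, vt = np.linalg.svd(resid, full_matrices=False)
K_aug = np.vstack([K_hat, vt[:1]])

# Prediction: project a new spectrum onto K_aug; the first two
# coefficients estimate the analyte concentrations.
s_new = np.array([0.3, 0.6]) @ K_true
coef, *_ = np.linalg.lstsq(K_aug.T, s_new, rcond=None)
print(coef[:2])   # ≈ [0.3, 0.6]
```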
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
Bokhart, Mark T.; Rosen, Elias; Thompson, Corbin; Sykes, Craig; Kashuba, Angela D. M.; Muddiman, David C.
2015-01-01
A quantitative mass spectrometry imaging (QMSI) technique using infrared matrix-assisted laser desorption electrospray ionization (IR-MALDESI) is demonstrated for the antiretroviral (ARV) drug emtricitabine in incubated human cervical tissue. Method development of the QMSI technique leads to a gain in sensitivity and removal of interferences for several ARV drugs. Analyte response was significantly improved by a detailed evaluation of several cationization agents. Increased sensitivity and removal of an isobaric interference was demonstrated with sodium chloride in the electrospray solvent. Voxel-to-voxel variability was improved for the MSI experiments by normalizing analyte abundance to a uniformly applied compound with similar characteristics to the drug of interest. Finally, emtricitabine was quantified in tissue with a calibration curve generated from the stable isotope-labeled analog of emtricitabine followed by cross-validation using liquid chromatography tandem mass spectrometry (LC-MS/MS). The quantitative IR-MALDESI analysis proved to be reproducible with an emtricitabine concentration of 17.2±1.8 μg/g tissue. This amount corresponds to the detection of 7 fmol/voxel in the IR-MALDESI QMSI experiment. Adjacent tissue slices were analyzed using LC-MS/MS which resulted in an emtricitabine concentration of 28.4±2.8 μg/g tissue. PMID:25318460
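A minimal sketch of the quantification step described: a linear calibration built with the stable isotope-labeled analog is inverted to convert a measured analyte/internal-standard response ratio into a tissue concentration. All numbers below are illustrative, not the study's data.

```python
import numpy as np

# Illustrative calibration: response ratio (analyte / labeled standard)
# versus concentration, followed by inversion for a measured voxel.
cal_conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])     # μg/g tissue
cal_ratio = np.array([0.08, 0.41, 0.83, 2.06, 4.10])  # measured ratios

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def quantify(ratio):
    """Invert the linear calibration to get μg/g tissue."""
    return (ratio - intercept) / slope

print(round(quantify(1.41), 1))   # ≈ 17.2 μg/g tissue (illustrative)
```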
Toward a quantitative approach to migrants integration
NASA Astrophysics Data System (ADS)
Barra, A.; Contucci, P.
2010-03-01
Migration phenomena and all the related issues, like the integration of different social groups, are intrinsically complex problems, since they strongly depend on several competing mechanisms such as economic factors, cultural differences and many others. By identifying a few essential assumptions, and using the statistical mechanics of complex systems, we propose a novel quantitative approach that provides a minimal theory for those phenomena. We show that the competitive interactions in decision making between a population of N host citizens and P immigrants, a bi-partite spin glass, give rise to a social consciousness inside the host community in the sense of the associative memory of neural networks. The theory leads to a natural quantitative definition of migrants' "integration" inside the community. From the technical point of view, this minimal picture assumes, as control parameters, only general notions like the strength of the random interactions, the ratio between the sizes of the two parties and the cultural influence. A few steps toward more refined models, including a digression on the kinds of experiences felt, some structure on the random interaction topology (such as dilution, to avoid the plain mean-field approach) and correlations between the experiences felt by the two parties (biasing the distribution of the couplings), are discussed at the end, where we show the robustness of our approach.
Role of intestinal flora imbalance in pathogenesis of pouchitis.
Feng, Xiao-Bo; Jiang, Jun; Li, Min; Wang, Gang; You, Jin-Wei; Zuo, Jian
2016-08-01
To discuss the role of intestinal flora imbalance in the pathogenesis of pouchitis, a pouchitis rat model was established, and faeces and mucous membrane samples were collected regularly; bacterial nucleic acids were extracted from the samples for quantitative analysis of the intestinal flora using real-time quantitative PCR and high-throughput sequencing. Disorder of the intestinal flora appeared on day 7 of the experiment, and pouchitis presented on day 21. On day 31, compared with the control and non-pouchitis groups, the quantities of Bifidobacterium and Lactobacillus in the mucous membrane and faeces samples of the pouchitis model rats were significantly decreased (P < 0.05), and Bacteroidetes, Faecalibacterium prausnitzii and the Clostridium leptum subgroup (cluster IV) in the pouchitis mucous membrane were significantly decreased (P < 0.05). The Clostridium coccoides group (cluster XIV) was the main flora in the pouchitis mucous membrane, and the bacterial diversity of the non-pouchitis and control groups was significantly higher than that of the pouchitis group (P < 0.05). Intestinal flora imbalance is thus one of the factors contributing to the incidence of pouchitis; this study provides a clue to the pathogenesis of, and a treatment direction for, intestinal inflammatory disease. Copyright © 2016 Hainan Medical College. Production and hosting by Elsevier B.V. All rights reserved.
Sunderland, John J; Christian, Paul E
2015-01-01
The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing trial sites to use their preferred reconstruction methodologies. Predictably, time-of-flight-enabled scanners exhibited less size-based partial-volume bias than non-time-of-flight scanners. The CTN scanner validation experience over the past 5 y has generated a rich, well-curated phantom dataset from which PET/CT make-and-model and reconstruction-dependent quantitative behaviors were characterized for the purposes of understanding and estimating scanner-based variances in clinical trials. These results should make it possible to identify and recommend make-and-model-specific reconstruction strategies to minimize measurement variability in cancer clinical trials. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
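For readers unfamiliar with the metric being profiled: the standardized uptake value normalizes measured activity concentration by injected dose per unit body weight, so a well-calibrated scanner reads SUV ≈ 1.0 in a uniform phantom background. The sketch below shows the standard body-weight formula with illustrative numbers, not CTN data.

```python
def suv(activity_conc_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue activity concentration divided by the
    injected dose per gram (assumes tissue density of 1 g/mL)."""
    return activity_conc_bq_per_ml / (injected_dose_bq / body_weight_g)

# Uniform phantom region: ideal calibration gives SUV ≈ 1.0.
print(suv(activity_conc_bq_per_ml=5290.0,
          injected_dose_bq=370e6,      # 10 mCi in Bq (illustrative)
          body_weight_g=70000))        # 70 kg phantom "weight"
```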
An Overview of Quantitative Risk Assessment of Space Shuttle Propulsion Elements
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
1998-01-01
Since the Space Shuttle Challenger accident in 1986, NASA has been working to incorporate quantitative risk assessment (QRA) in decisions concerning the Space Shuttle and other NASA projects. One current major NASA QRA study is the creation of a risk model for the overall Space Shuttle system. The model is intended to provide a tool to estimate Space Shuttle risk and to perform sensitivity analyses/trade studies, including the evaluation of upgrades. Marshall Space Flight Center (MSFC) is a part of the NASA team conducting the QRA study; MSFC responsibility involves modeling the propulsion elements of the Space Shuttle, namely: the External Tank (ET), the Solid Rocket Booster (SRB), the Reusable Solid Rocket Motor (RSRM), and the Space Shuttle Main Engine (SSME). This paper discusses the approach that MSFC has used to model its Space Shuttle elements, including insights obtained from this experience in modeling large-scale, highly complex systems with a varying availability of success/failure data. Insights, which are applicable to any QRA study, pertain to organizing the modeling effort, obtaining customer buy-in, preparing documentation, and using varied modeling methods and data sources. Also provided is an overall evaluation of the study results, including the strengths and the limitations of the MSFC QRA approach and of QRA technology in general.
Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio
2002-05-01
This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.
Rapid Configurational Fluctuations in a Model of Methylcellulose
NASA Astrophysics Data System (ADS)
Li, Xiaolan; Dorfman, Kevin
Methylcellulose is a thermoresponsive polymer that undergoes a phase transition at elevated temperature, forming fibrils of uniform diameter. However, the gelation mechanism is still unclear, in particular at higher polymer concentrations. We have investigated a coarse-grained model for methylcellulose, proposed by Larson and coworkers, that produces collapsed toroids in dilute solution with a radius close to that found in experiments. Using Brownian dynamics simulations, we demonstrate that this model's dihedral potential generates "flipping events", which help the chain avoid kinetic traps by allowing sudden transitions between a coiled and a collapsed state. If the dihedral potential is removed, the chains cannot escape from their collapsed configuration, whereas at high dihedral potentials, the chains cannot stabilize the collapsed state. We will present quantitative results on the effect of the dihedral potential on both chain statistics and dynamic behavior, and discuss the implications of our results for the spontaneous formation of high-aspect-ratio fibrils in experiments.
Heinmets, F; Leary, R H
1991-06-01
A model system (1) was established to analyze purine and pyrimidine metabolism. This system has been expanded to include macrosimulation of DNA synthesis and the study of its regulation by terminal deoxynucleoside triphosphates (dNTPs) via a complex set of interactions. Computer experiments reveal that our model exhibits adequate and reasonable sensitivity in terms of dNTP pool levels and rates of DNA synthesis when inputs to the system are varied. These simulation experiments reveal that in order to achieve maximum DNA synthesis (in terms of purine metabolism), a proper balance is required in guanine and adenine input into this metabolic system. Excessive inputs will become inhibitory to DNA synthesis. In addition, studies are carried out on rates of DNA synthesis when various parameters are changed quantitatively. The current system is formulated by 110 differential equations.
Optimal Design of Experiments by Combining Coarse and Fine Measurements
NASA Astrophysics Data System (ADS)
Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.
2017-11-01
In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
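A schematic rendering of the strategy as described: the coarse (above/below threshold) data supply a feature correlation structure, which enters the model as a quadratic term whose weight is fit, together with the linear coefficients, to the few fine measurements. The toy data generation and all names below are assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat = 30

# Many cheap, categorical ("coarse") measurements: binary feature
# vectors labeled only as above/below a potency threshold.
X_coarse = rng.integers(0, 2, size=(2000, n_feat)).astype(float)
hits = X_coarse[(X_coarse[:, 0] * X_coarse[:, 1]) > 0]  # toy "actives"

# Salient-feature step: correlation structure of features among hits.
C = np.cov(hits.T)

# Few expensive, real-valued ("fine") measurements.
X_fine = rng.integers(0, 2, size=(60, n_feat)).astype(float)
w_true = rng.normal(size=n_feat)
quad = np.einsum("ni,ij,nj->n", X_fine, C, X_fine)
y_fine = X_fine @ w_true + 0.5 * quad + 0.01 * rng.normal(size=60)

# Inference: a linear model augmented by the coarse-derived quadratic
# term; the fine data set all the weights.
A = np.column_stack([X_fine, quad])
coef, *_ = np.linalg.lstsq(A, y_fine, rcond=None)
print(round(coef[-1], 2))   # recovers the quadratic weight (~0.5)
```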
Gestalt isomorphism and the primacy of subjective conscious experience: a Gestalt Bubble model.
Lehar, Steven
2003-08-01
A serious crisis is identified in theories of neurocomputation, marked by a persistent disparity between the phenomenological or experiential account of visual perception and the neurophysiological level of description of the visual system. In particular, conventional concepts of neural processing offer no explanation for the holistic global aspects of perception identified by Gestalt theory. The problem is paradigmatic and can be traced to contemporary concepts of the functional role of the neural cell, known as the Neuron Doctrine. In the absence of an alternative neurophysiologically plausible model, I propose a perceptual modeling approach, to model the percept as experienced subjectively, rather than modeling the objective neurophysiological state of the visual system that supposedly subserves that experience. A Gestalt Bubble model is presented to demonstrate how the elusive Gestalt principles of emergence, reification, and invariance can be expressed in a quantitative model of the subjective experience of visual consciousness. That model in turn reveals a unique computational strategy underlying visual processing, which is unlike any algorithm devised by man, and certainly unlike the atomistic feed-forward model of neurocomputation offered by the Neuron Doctrine paradigm. The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation. The common objections to this "picture-in-the-head" concept of perceptual representation are shown to be ill founded.
Kann, Z R; Skinner, J L
2014-09-14
Non-polarizable models for ions and water quantitatively and qualitatively misrepresent the salt concentration dependence of water diffusion in electrolyte solutions. In particular, experiment shows that the water diffusion coefficient increases in the presence of salts of low charge density (e.g., CsI), whereas the results of simulations with non-polarizable models show a decrease of the water diffusion coefficient in all alkali halide solutions. We present a simple charge-scaling method based on the ratio of the solvent dielectric constants from simulation and experiment. Using an ion model that was developed independently of a solvent, i.e., in the crystalline solid, this method improves the water diffusion trends across a range of water models. When used with a good-quality water model, e.g., TIP4P/2005 or E3B, this method recovers the qualitative behaviour of the water diffusion trends. The model and method used were also shown to give good results for other structural and dynamic properties including solution density, radial distribution functions, and ion diffusion coefficients.
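One compact reading of the ratio-based scaling described (the exact form used in the paper may differ): matching the effective ion-ion Coulomb coupling q²/ε between simulation and experiment gives a scale factor equal to the square root of the dielectric-constant ratio.

```python
import numpy as np

def scaled_charge(q, eps_sim, eps_exp=78.4):
    """Scale ionic charges so the long-range Coulomb interaction in the
    model solvent matches experiment: q_s**2/eps_sim = q**2/eps_exp."""
    return q * np.sqrt(eps_sim / eps_exp)

# TIP4P/2005 water has a dielectric constant near 58, so the factor
# applied to all ionic charges comes out around 0.86.
print(scaled_charge(1.0, eps_sim=58.0))
```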
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing
NASA Astrophysics Data System (ADS)
Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin
2017-06-01
This paper presents an analytical model of a resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in the square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity is established as a function of the device structure parameters and the encapsulation pressure. The analytical model has been verified by comparing calculated results with experimental results; the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity with respect to the structure parameters and encapsulation pressure is conducted based on the proposed model.
Vass, Caroline M; Payne, Katherine
2017-09-01
There is emerging interest in the use of discrete choice experiments as a means of quantifying the perceived balance between benefits and risks (quantitative benefit-risk assessment) of new healthcare interventions, such as medicines, under assessment by regulatory agencies. For stated preference data on benefit-risk assessment to be used in regulatory decision making, the methods to generate these data must be valid, reliable and capable of producing meaningful estimates understood by decision makers. Some reporting guidelines exist for discrete choice experiments, and for related methods such as conjoint analysis. However, existing guidelines focus on reporting standards, are general in focus and do not consider the requirements for using discrete choice experiments specifically for quantifying benefit-risk assessments in the context of regulatory decision making. This opinion piece outlines the current state of play in using discrete choice experiments for benefit-risk assessment and proposes key areas needing to be addressed to demonstrate that discrete choice experiments are an appropriate and valid stated preference elicitation method in this context. Methodological research is required to establish: how robust the results of discrete choice experiments are to formats and methods of risk communication; how information in the discrete choice experiment can be presented effectively to respondents; whose preferences should be elicited; the correct underlying utility function and analytical model; the impact of heterogeneity in preferences; and the generalisability of the results. We believe these methodological issues should be addressed, alongside developing a 'reference case', before agencies can safely and confidently use discrete choice experiments for quantitative benefit-risk assessment in the context of regulatory decision making for new medicines and healthcare products.
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
Barani, T.; Bruschi, E.; Pizzocri, D.; ...
2017-01-03
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.
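The diffusion-plus-burst idea can be conveyed with a rough sketch: an early-time Booth-sphere diffusive release estimate combined with a one-off burst fraction triggered when temperature crosses a threshold, standing in for micro-cracking. This is a toy under stated assumptions, not the BISON/TRANSURANUS model; the Arrhenius constants, grain radius, and burst parameters are illustrative.

```python
import numpy as np

# Toy diffusion-plus-burst fission gas release (FGR): early-time
# Booth-sphere diffusive fraction f ~ 4*sqrt(D*t/pi)/a, plus a one-off
# burst that vents a fixed fraction of the retained gas when temperature
# first exceeds a threshold. All constants are illustrative assumptions.

def fgr_history(times, temps, a=5e-6, burst_T=1800.0, burst_frac=0.4):
    D0, Q_over_R = 7.6e-10, 35000.0   # assumed Arrhenius diffusivity (m^2/s, K)
    released, burst_done, history = 0.0, False, []
    for t, T in zip(times, temps):
        D = D0 * np.exp(-Q_over_R / T)
        released = max(released, min(4.0 * np.sqrt(D * t / np.pi) / a, 1.0))
        if T > burst_T and not burst_done:       # micro-cracking event
            released += burst_frac * (1.0 - released)
            burst_done = True
        history.append(min(released, 1.0))
    return np.array(history)

times = np.linspace(1.0, 3600.0, 200)            # s
temps = 1200.0 + 800.0 * times / times[-1]       # linear ramp to 2000 K
f = fgr_history(times, temps)
print(f"FGR mid-ramp: {f[100]:.3f}, after burst: {f[-1]:.3f}")
```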
Cognitive niches: an ecological model of strategy selection.
Marewski, Julian N; Schooler, Lael J
2011-07-01
How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each strategy a cognitive niche, that is, a limited number of situations in which the strategy can be applied, simplifying strategy selection. To illustrate our proposal, we consider selection in the context of 2 theories: the simple heuristics framework and the ACT-R (adaptive control of thought-rational) architecture of cognition. From the heuristics framework, we adopt the thesis that people make decisions by selecting from a repertoire of simple decision strategies that exploit regularities in the environment and draw on cognitive capacities, such as memory and time perception. ACT-R provides a quantitative theory of how these capacities adapt to the environment. In 14 simulations and 10 experiments, we consider the choice between strategies that operate on the accessibility of memories and those that depend on elaborate knowledge about the world. Based on Internet statistics, our model quantitatively predicts people's familiarity with and knowledge of real-world objects, the distributional characteristics of the associated speed of memory retrieval, and the cognitive niches of classic decision strategies, including those of the fluency, recognition, integration, lexicographic, and sequential-sampling heuristics. In doing so, the model specifies when people will be able to apply different strategies and how accurate, fast, and effortless people's decisions will be.
Confessions of a Quantitative Educational Researcher Trying to Teach Qualitative Research.
ERIC Educational Resources Information Center
Stallings, William M.
1995-01-01
Describes one quantitative educational researcher's experiences teaching qualitative research, the approach used in classes, and the successes and failures. These experiences are examined from the viewpoint of a traditionally trained professor who has now been called upon to master and teach qualitative research. (GR)
Quantitative Imaging in Laboratory: Fast Kinetics and Fluorescence Quenching
ERIC Educational Resources Information Center
Cumberbatch, Tanya; Hanley, Quentin S.
2007-01-01
Quantitative imaging, a process commonly used in the laboratory, is shown to be useful for studying the fast kinetics and fluorescence quenching of many experiments. The imaging technique is inexpensive and hence can be used in many absorption and luminescence experiments.
Perceptions of Mentoring: Examining the Experiences of Women Superintendents
ERIC Educational Resources Information Center
Copeland, Scarlett M.; Calhoun, Daniel W.
2014-01-01
This descriptive mixed methods study gathered both quantitative and qualitative data on the mentoring experiences of women superintendents in a Southeastern state. The quantitative participants included 39 women superintendents from this state and the qualitative portion of the study was comprised of eight female superintendents purposefully…
Process modeling KC-135 aircraft
NASA Technical Reports Server (NTRS)
Workman, Gary L.
1991-01-01
Instrumentation will be provided for the KC-135 aircraft to give a quantitative measure of g-level variation during parabolic flights and of its effect on experiments whose results differ with differences in convective flow. The flight apparatus will provide video recording of the effects of the g-level variations on varying fluid samples. The apparatus will be constructed to be available to fly on the KC-135 during most missions.
Development of Landscape Metrics to Support Process-Driven Ecological Modeling
2014-04-01
… channel experiences shoaling due to strong tidal currents transporting sediments and has a symmetrical north-south, tide-dominant ebb delta. … If quantitative relationships can be established between landscape pattern formation and environmental or geomorphic processes, then those relationships could …
The Role of Excitons on Light Amplification in Lead Halide Perovskites.
Lü, Quan; Wei, Haohan; Sun, Wenzhao; Wang, Kaiyang; Gu, Zhiyuan; Li, Jiankai; Liu, Shuai; Xiao, Shumin; Song, Qinghai
2016-12-01
The role of excitons on the amplifications of lead halide perovskites has been explored. Unlike the photoluminescence, the intensity of amplified spontaneous emission is partially suppressed at low temperature. The detailed analysis and experiments show that the inhibition is attributed to the existence of exciton and a quantitative model has been built to explain the experimental observations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Mathematical Model of HIF-1 alpha Pathway, Oxygen Transport and Hypoxia
2017-09-01
… interpret experimental data in terms of underlying mechanisms. Such experiments, if quantitative, can also be used to calibrate and further parameterize …
Context-specific metabolic networks are consistent with experiments.
Becker, Scott A; Palsson, Bernhard O
2008-05-16
Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.
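The GIMME idea lends itself to a small linear-programming sketch: first maximize the presupposed metabolic objective, then minimize expression-based penalties on flux while holding the objective near its maximum. The toy network, expression values, and threshold below are illustrative; the published algorithm operates on genome-scale, reversible networks.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal GIMME-style sketch on a toy, irreversible 4-reaction network
# (illustrative only). Reactions: R1: -> A, R2: A -> B, R3: A -> B
# (isozyme route), R4: B -> (objective).
S = np.array([[ 1, -1, -1,  0],   # metabolite A balance
              [ 0,  1,  1, -1]])  # metabolite B balance
bounds = [(0, 10)] * 4

# Step 1: maximize the objective flux v4 (linprog minimizes, so negate).
res_max = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v_obj_max = -res_max.fun

# Step 2: penalize reactions whose expression falls below the threshold,
# in proportion to the shortfall (expression data assumed).
expression = np.array([12.0, 2.0, 9.0, 11.0])
threshold = 8.0
penalty = np.maximum(threshold - expression, 0.0)

# Require at least 90% of the maximal objective while minimizing
# penalized flux; the optimum value is the inconsistency score.
res = linprog(c=penalty, A_eq=S, b_eq=np.zeros(2),
              A_ub=np.array([[0, 0, 0, -1.0]]),
              b_ub=np.array([-0.9 * v_obj_max]), bounds=bounds)
print("fluxes:", np.round(res.x, 3), " inconsistency score:", round(res.fun, 3))
```

In this toy case the LP reroutes flux through the well-expressed isozyme (R3), yielding a zero inconsistency score: the expression data are fully consistent with the objective.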
Wei, Z G; Macwan, A P; Wieringa, P A
1998-06-01
In this paper we quantitatively model degree of automation (DofA) in supervisory control as a function of the number and nature of tasks to be performed by the operator and automation. This model uses a task weighting scheme in which weighting factors are obtained from task demand load, task mental load, and task effect on system performance. The computation of DofA is demonstrated using an experimental system. Based on controlled experiments using operators, analyses of the task effect on system performance, the prediction and assessment of task demand load, and the prediction of mental load were performed. Each experiment had a different DofA. The effect of a change in DofA on system performance and mental load was investigated. It was found that system performance became less sensitive to changes in DofA at higher levels of DofA. The experimental data showed that when the operator controlled a partly automated system, perceived mental load could be predicted from the task mental load for each task component, as calculated by analyzing a situation in which all tasks were manually controlled. Actual or potential applications of this research include a methodology to balance and optimize the automation of complex industrial systems.
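A minimal sketch of a task-weighted DofA computation follows, assuming (as one plausible reading of the abstract) that DofA is the weighted share of tasks assigned to automation and that the three weighting factors are simply averaged; the tasks and numbers are invented for illustration.

```python
# Task-weighted degree of automation (DofA): weights combine task demand
# load, task mental load, and task effect on system performance.
# All entries below are illustrative, not the study's task set.

tasks = {
    # name: (demand_load, mental_load, effect_on_performance, automated?)
    "monitor_alarms":  (0.6, 0.7, 0.9, True),
    "adjust_setpoint": (0.8, 0.6, 0.8, False),
    "log_events":      (0.3, 0.2, 0.3, True),
    "fault_diagnosis": (0.9, 0.9, 1.0, False),
}

def weight(demand, mental, effect):
    # One plausible aggregation of the three factors: a simple average.
    return (demand + mental + effect) / 3.0

total = sum(weight(*t[:3]) for t in tasks.values())
automated = sum(weight(*t[:3]) for t in tasks.values() if t[3])
print(f"DofA = {automated / total:.2f}")
```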
Scaling behavior of columnar structure during physical vapor deposition
NASA Astrophysics Data System (ADS)
Meese, W. J.; Lu, T.-M.
2018-02-01
The statistical effects that different conditions in physical vapor deposition, such as sputter deposition, have on thin film morphology have long been a subject of interest. One notable effect is that of column development due to differential chamber pressure in the well-known empirical Thornton structure zone model. The model is qualitative in nature, and theoretical understanding with quantitative predictions of the morphology is still lacking due, in part, to the absence of a quantitative description of the incident flux distribution on the growth front. In this work, we propose an incident Gaussian flux model developed from a series of binary hard-sphere collisions and simulate its effects using Monte Carlo methods and a solid-on-solid growth scheme. We also propose an approximate cosine-power distribution for faster Monte Carlo sampling. With this model, it is observed that higher chamber pressures widen the average deposition angle and similarly increase the growth of column diameters (or lateral correlation length) and the column-to-column separation (film surface wavelength). We treat both the column diameter and the surface wavelength as power laws. Both the column diameter exponent and the wavelength exponent are very sensitive to changes in pressure for low pressures (0.13 Pa to 0.80 Pa); meanwhile, both exponents saturate around a value of 0.6 for higher pressures (0.80 Pa to 6.7 Pa). These predictions will serve as guides to future experiments for quantitative description of the film morphology under a wide range of vapor pressure.
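A toy version of the scheme can convey the pressure effect: sample incidence angles from a cosine-power flux by inverse-CDF and grow a 1+1D ballistic surface with shadowing. This sketch is not the paper's Monte Carlo code; the sticking rule and all parameters are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cosine_power(n, p):
    """Sample incidence angles from a cosine-power flux with density
    f(theta) ~ cos(theta)**p * sin(theta) on [0, pi/2]; the inverse CDF
    is theta = arccos(u**(1/(p+1)))."""
    return np.arccos(rng.random(n) ** (1.0 / (p + 1)))

def deposit(width=100, n_particles=10000, p=2):
    """1+1D ballistic deposition with oblique incidence and shadowing:
    a particle descends along its incidence angle and sticks on the first
    column whose top it reaches. A toy analogue, not a reimplementation."""
    h = np.zeros(width, dtype=int)
    thetas = sample_cosine_power(n_particles, p)
    signs = rng.choice([-1, 1], n_particles)
    for th, s in zip(thetas, signs):
        x, z = rng.random() * width, h.max() + 2
        dx = s * np.tan(min(th, 1.5))   # clip grazing angles for stability
        while True:
            z -= 1
            x = (x + dx) % width
            if z <= h[int(x)]:          # hit the top of a column: stick
                h[int(x)] += 1
                break
    return h

for p in (1, 5, 50):   # larger p ~ narrower flux ~ lower chamber pressure
    print(f"p={p:3d}  interface width = {deposit(p=p).std():.2f}")
```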
NASA Astrophysics Data System (ADS)
Edmiston, John Kearney
This work explores the field of continuum plasticity from two fronts. On the theory side, we establish a complete specification of a phenomenological theory of plasticity for single crystals. The model serves as an alternative to the popular crystal plasticity formulation. Such a model has been previously proposed in the literature; the new contribution made here is the constitutive framework and resulting simulations. We calibrate the model to available data and use a simple numerical method to explore resulting predictions in plane strain boundary value problems. Results show promise for further investigation of the plasticity model. Conveniently, this theory comes with a corresponding experimental tool in X-ray diffraction. Recent advances in hardware technology at synchrotron sources have led to an increased use of the technique for studies of plasticity in the bulk of materials. The method has been successful in qualitative observations of material behavior, but its use in quantitative studies seeking to extract material properties is open for investigation. Therefore in the second component of the thesis several contributions are made to synchrotron X-ray diffraction experiments, in terms of method development as well as the quantitative reporting of constitutive parameters. In the area of method development, analytical tools are developed to determine the available precision of this type of experiment—a crucial aspect to determine if the method is to be used for quantitative studies. We also extract kinematic information relating to intragranular inhomogeneity which is not accessible with traditional methods of data analysis. In the area of constitutive parameter identification, we use the method to extract parameters corresponding to the proposed formulation of plasticity for a titanium alloy (HCP) which is continuously sampled by X-ray diffraction during uniaxial extension. These results and the lessons learned from the efforts constitute early reporting of the quantitative profitability of undertaking such a line of experimentation for the study of plastic deformation processes.
Computational model for amoeboid motion: Coupling membrane and cytosol dynamics
NASA Astrophysics Data System (ADS)
Moure, Adrian; Gomez, Hector
2016-10-01
A distinguishing feature of amoeboid motion is that the migrating cell undergoes large deformations, caused by the emergence and retraction of actin-rich protrusions, called pseudopods. Here, we propose a cell motility model that represents pseudopod dynamics, as well as its interaction with membrane signaling molecules. The model accounts for internal and external forces, such as protrusion, contraction, adhesion, surface tension, or those arising from cell-obstacle contacts. By coupling the membrane and cytosol interactions we are able to reproduce a realistic picture of amoeboid motion. The model results are in quantitative agreement with experiments and show how cells may take advantage of the geometry of their microenvironment to migrate more efficiently.
Simulated linear test applied to quantitative proteomics.
Pham, T V; Jimenez, C R
2016-09-01
Omics studies aim to find significant changes due to biological or functional perturbation. However, gene and protein expression profiling experiments contain inherent technical variation. In discovery proteomics studies where the number of samples is typically small, technical variation plays an important role because it contributes considerably to the observed variation. Previous methods place both technical and biological variations in tightly integrated mathematical models that are difficult to adapt for different technological platforms. Our aim is to derive a statistical framework that allows the inclusion of a wide range of technical variability. We introduce a new method called the simulated linear test, or the s-test, that is easy to implement and easy to adapt for different models of technical variation. It generates virtual data points from the observed values according to a pre-defined technical distribution and subsequently employs linear modeling for significance analysis. We demonstrate the flexibility of the proposed approach by deriving a new significance test for quantitative discovery proteomics for which missing values have been a major issue for traditional methods such as the t-test. We evaluate the result on two label-free (phospho) proteomics datasets based on ion-intensity quantitation. Available at http://www.oncoproteomics.nl/software/stest.html. Contact: t.pham@vumc.nl. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
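One plausible reading of the s-test can be sketched as follows: draw virtual datasets from an assumed technical-variation model around the observed intensities, apply an ordinary linear test (here equivalent to a two-sample t-test) to each draw, and aggregate. The Gaussian technical model, the median-p aggregation, and the numbers below are our assumptions, not the published method's exact scheme.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Sketch of a simulated linear test: perturb each observation with an
# assumed technical model, run a linear (two-group) test per draw, and
# summarize across draws. Illustrative only.

def simulated_linear_test(group_a, group_b, technical_cv=0.1, n_sim=500):
    pvals = []
    for _ in range(n_sim):
        va = rng.normal(group_a, technical_cv * np.abs(group_a))
        vb = rng.normal(group_b, technical_cv * np.abs(group_b))
        # a linear model with one group indicator reduces to a t-test
        pvals.append(stats.ttest_ind(va, vb, equal_var=False).pvalue)
    return float(np.median(pvals))

a = np.array([1050.0, 980.0, 1110.0])    # intensities, condition A (assumed)
b = np.array([1600.0, 1720.0, 1540.0])   # condition B (assumed)
print(f"simulated linear test p ~ {simulated_linear_test(a, b):.3g}")
```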
A quantitative study on magnesium alloy stent biodegradation.
Gao, Yuanming; Wang, Lizhen; Gu, Xuenan; Chu, Zhaowei; Guo, Meng; Fan, Yubo
2018-06-06
Insufficient scaffolding time caused by rapid corrosion is the main problem of magnesium alloy stents (MAS). Finite element methods have been used to investigate MAS corrosion; however, most studies treat every element as corroding one-dimensionally. Multi-dimensional corrosion significantly influences the mechanical integrity of MAS structural features such as edges and corners. In this study, the effects of multi-dimensional corrosion were quantified experimentally, and a phenomenological corrosion model was developed to capture them. We performed immersion tests on magnesium alloy (AZ31B) cubes with different numbers of exposed surfaces to analyze the dimensional differences. Corrosion rates of the cubes were found to be almost proportional to their numbers of exposed surfaces, especially when pitting corrosion was not pronounced. The cubes also represented the hexahedral elements used in simulation. In conclusion, the corrosion rate of each element accelerates as its number of corroding surfaces increases in multi-dimensional corrosion. Under uniform corrosion, the damage ratios among elements of the same size are proportional to the ratios of their corrosion-surface numbers. A finite element simulation using the proposed model provided more detail on the evolution of morphology and mechanics over the scaffolding time by removing 25.7% of the MAS elements. The proposed corrosion model reflects the effects of multi-dimensionality on corrosion and can be used to predict the degradation of MAS quantitatively. Copyright © 2018 Elsevier Ltd. All rights reserved.
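The surface-number-scaled corrosion rule reported above lends itself to a compact sketch: each solid element accumulates damage in proportion to its count of exposed faces and is removed once the damage reaches one. The 2D grid, rate constant, and noise term are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multi-dimensional corrosion: damage rate of each solid cell scales
# with its number of exposed faces; failed cells are removed. A corner
# cell (2 exposed faces) therefore corrodes ~2x faster than an edge cell.

def corrode(grid, k=0.02, steps=200, noise=0.2):
    damage = np.zeros_like(grid, dtype=float)
    for _ in range(steps):
        solid = grid == 1
        exposed = np.zeros_like(damage)
        for axis in (0, 1):                       # count exposed faces
            for shift in (1, -1):
                neighbor = np.roll(grid, shift, axis=axis)
                exposed += solid & (neighbor == 0)
        damage += k * exposed * (1 + noise * rng.random(grid.shape))
        grid = np.where(damage >= 1.0, 0, grid)   # remove failed elements
    return grid

grid = np.pad(np.ones((20, 20), dtype=int), 5)    # solid block in solution
remaining = corrode(grid)
print(f"elements remaining: {remaining.sum()} / 400")
```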
A model for the Pockels effect in distorted liquid crystal blue phases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castles, F., E-mail: flynn.castles@materials.ox.ac.uk
2015-09-07
Recent experiments have found that a mechanically distorted blue phase can exhibit a primary linear electro-optic (Pockels) effect [F. Castles et al., Nat. Mater. 13, 817 (2014)]. Here, it is shown that flexoelectricity can account for the experimental results and a model, which is based on continuum theory but takes into account the sub-unit-cell structure, is proposed. The model provides a quantitative description of the effect accurate to the nearest order of magnitude and predicts that the Pockels coefficient(s) in an optimally distorted blue phase may be two orders of magnitude larger than in lithium niobate.
Beam wandering of femtosecond laser filament in air.
Yang, Jing; Zeng, Tao; Lin, Lie; Liu, Weiwei
2015-10-05
The spatial wandering of a femtosecond laser filament caused by the filament heating effect in air has been studied. An empirical formula has also been derived from the classical Karman turbulence model, which quantitatively determines the displacement of the beam center as a function of the propagation distance and the effective turbulence structure constant. After fitting the experimental data with this formula, the effective turbulence structure constant has been estimated for a single filament generated in a laboratory environment. With this result, one may be able to estimate quantitatively the displacement of a filament over long-distance propagation and interpret the practical performance of experiments assisted by femtosecond laser filamentation, such as remote air lasing, pulse compression, high order harmonic generation (HHG), etc.
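Without reproducing the paper's empirical formula, the fitting step can be illustrated under the textbook beam-wander scaling, where the wander variance grows roughly as Cn2 times the cube of the propagation distance; the prefactor and the synthetic data below are assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: extract an effective turbulence structure constant Cn2 from
# beam-center wander data, assuming sigma_r^2 ~ K * Cn2 * z**3.
# K and the synthetic "observations" are illustrative assumptions.

K = 2.4                                   # assumed dimensionless prefactor
z = np.linspace(2.0, 20.0, 10)            # propagation distance, m
sigma_obs = np.sqrt(K * 3e-11 * z**3) * (
    1 + 0.05 * np.random.default_rng(7).standard_normal(z.size))

model = lambda z, cn2: np.sqrt(K * cn2 * z**3)
(cn2_fit,), _ = curve_fit(model, z, sigma_obs, p0=(1e-11,))
print(f"effective Cn2 ~ {cn2_fit:.2e} m^(-2/3)")
```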
Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion
NASA Technical Reports Server (NTRS)
Kojima, Jun J.; Fischer, David G.
2012-01-01
We present our strategic experiment and thermochemical analyses of combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique promises to enhance the accuracy of quantitative scalar measurements in a point-wise single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for the routine implementation of the diagnostic in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space- and time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.
Nagayama, T.; Bailey, J. E.; Loisel, G.; ...
2016-02-05
Recently, frequency-resolved iron opacity measurements at electron temperatures of 170–200 eV and electron densities of (0.7–4.0) × 10²² cm⁻³ revealed a 30–400% disagreement with the calculated opacities [J. E. Bailey et al., Nature (London) 517, 56 (2015)]. The discrepancies have a high impact on astrophysics, atomic physics, and high-energy density physics, and it is important to verify our understanding of the experimental platform with simulations. Reliable simulations are challenging because the temporal and spatial evolution of the source radiation and of the sample plasma are both complex and incompletely diagnosed. In this article, we describe simulations that reproduce the measured temperature and density in recent iron opacity experiments performed at the Sandia National Laboratories Z facility. The time-dependent spectral irradiance at the sample is estimated using the measured time- and space-dependent source radiation distribution, in situ source-to-sample distance measurements, and a three-dimensional (3D) view-factor code. The inferred spectral irradiance is used to drive 1D sample radiation hydrodynamics simulations. The images recorded by slit-imaged space-resolved spectrometers are modeled by solving radiation transport of the source radiation through the sample. We find that the same drive radiation time history successfully reproduces the measured plasma conditions for eight different opacity experiments. These results provide a quantitative physical explanation for the observed dependence of both temperature and density on the sample configuration. Simulated spectral images for the experiments without the FeMg sample show quantitative agreement with the measured spectral images. The agreement in spectral profile, spatial profile, and brightness provides further confidence in our understanding of the backlight-radiation time history and image formation. Furthermore, these simulations bridge the static-uniform picture of the data interpretation and the dynamic-gradient reality of the experiments, and they will allow us to quantitatively assess the impact of effects neglected in the data interpretation.
Quantitative Analysis of Nail Polish Remover Using Nuclear Magnetic Resonance Spectroscopy Revisited
ERIC Educational Resources Information Center
Hoffmann, Markus M.; Caccamis, Joshua T.; Heitz, Mark P.; Schlecht, Kenneth D.
2008-01-01
Substantial modifications are presented for a previously described experiment using nuclear magnetic resonance (NMR) spectroscopy to quantitatively determine analytes in commercial nail polish remover. The revised experiment is intended for a second- or third-year laboratory course in analytical chemistry and can be conducted for larger laboratory…
The Vinyl Acetate Content of Packaging Film: A Quantitative Infrared Experiment.
ERIC Educational Resources Information Center
Allpress, K. N.; And Others
1981-01-01
Presents an experiment used in laboratory technician training courses to illustrate the quantitative use of infrared spectroscopy which is based on industrial and laboratory procedures for the determination of vinyl acetate levels in ethylene vinyl acetate packaging films. Includes three approaches to allow for varying path lengths (film…
Salisbury, Chris; Thomas, Clare; O'Cathain, Alicia; Rogers, Anne; Pope, Catherine; Yardley, Lucy; Hollinghurst, Sandra; Fahey, Tom; Lewis, Glyn; Large, Shirley; Edwards, Louisa; Rowsell, Alison; Segar, Julia; Brownsell, Simon; Montgomery, Alan A
2015-02-06
To develop a conceptual model for effective use of telehealth in the management of chronic health conditions, and to use this to develop and evaluate an intervention for people with two exemplar conditions: raised cardiovascular disease risk and depression. The model was based on several strands of evidence: a metareview and realist synthesis of quantitative and qualitative evidence on telehealth for chronic conditions; a qualitative study of patients' and health professionals' experience of telehealth; a quantitative survey of patients' interest in using telehealth; and review of existing models of chronic condition management and evidence-based treatment guidelines. Based on these evidence strands, a model was developed and then refined at a stakeholder workshop. Then a telehealth intervention ('Healthlines') was designed by incorporating strategies to address each of the model components. The model also provided a framework for evaluation of this intervention within parallel randomised controlled trials in the two exemplar conditions, and the accompanying process evaluations and economic evaluations. Primary care. The TElehealth in CHronic Disease (TECH) model proposes that attention to four components will offer interventions the best chance of success: (1) engagement of patients and health professionals, (2) effective chronic disease management (including subcomponents of self-management, optimisation of treatment, care coordination), (3) partnership between providers and (4) patient, social and health system context. Key intended outcomes are improved health, access to care, patient experience and cost-effective care. A conceptual model has been developed based on multiple sources of evidence which articulates how telehealth may best provide benefits for patients with chronic health conditions. It can be used to structure the design and evaluation of telehealth programmes which aim to be acceptable to patients and providers, and cost-effective. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
On buoyancy-driven natural ventilation of a room with a heated floor
NASA Astrophysics Data System (ADS)
Gladstone, Charlotte; Woods, Andrew W.
2001-08-01
The natural ventilation of a room, both with a heated floor and connected to a cold exterior through two openings, is investigated by combining quantitative models with analogue laboratory experiments. The heated floor generates an areal source of buoyancy while the openings allow displacement ventilation to operate. When combined, these produce a steady state in which the air in the room is well-mixed, and the heat provided by the floor equals the heat lost by displacement. We develop a quantitative model describing this process, in which the advective heat transfer through the openings is balanced with the heat flux supplied at the floor. This model is successfully tested with observations from small-scale analogue laboratory experiments. We compare our results with the steady-state flow associated with a point source of buoyancy: for a given applied heat flux, an areal source produces heated air of lower temperature but a greater volume flux of air circulates through the room. We generalize the model to account for the effects of (i) a cooled roof as well as a heated floor, and (ii) an external wind or temperature gradient. In the former case, the direction of the flow through the openings depends on the temperature of the exterior air relative to an averaged roof and floor temperature. In the latter case, the flow is either buoyancy dominated or wind dominated depending on the strength of the pressure associated with the wind. Furthermore, there is an intermediate multiple-solution regime in which either flow regime may develop.
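The steady-state balance described above reduces to a single nonlinear equation in the interior temperature excess, which a root finder handles directly. A minimal sketch follows, assuming the standard displacement-flow law Q = A_eff·sqrt(g·(ΔT/T)·H) with A_eff lumping the two opening areas and discharge coefficients; all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Steady heat balance for a heated-floor, two-opening room: floor input F
# equals advective loss rho*cp*Q*dT, with displacement ventilation flow
# Q = A_eff*sqrt(g*(dT/T0)*H). Parameter values are assumptions.

rho, cp, g, T0 = 1.2, 1005.0, 9.81, 293.0   # air properties, exterior temp (K)

def residual(dT, F, A_eff, H):
    Q = A_eff * np.sqrt(g * (dT / T0) * H)  # volume flux, m^3/s
    return F - rho * cp * Q * dT            # heat balance residual

F, A_eff, H = 2000.0, 0.25, 3.0             # W, m^2, m (assumed)
dT = brentq(residual, 1e-6, 50.0, args=(F, A_eff, H))
Q = A_eff * np.sqrt(g * (dT / T0) * H)
print(f"steady interior excess temperature: {dT:.2f} K, flow: {Q:.3f} m^3/s")
```

Note the scaling this encodes: for a fixed heat flux, the excess temperature grows as the two-thirds power of the flux, consistent with an areal source producing cooler air but a larger circulating volume flux than a point source.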
Kumar, Ramya; Lahann, Joerg
2016-07-06
The performance of polymer interfaces in biology is governed by a wide spectrum of interfacial properties. With the ultimate goal of identifying design parameters for stem cell culture coatings, we developed a statistical model that describes the dependence of brush properties on surface-initiated polymerization (SIP) parameters. Employing a design of experiments (DOE) approach, we identified operating boundaries within which four gel architecture regimes can be realized, including a new regime of associated brushes in thin films. Our statistical model can accurately predict the brush thickness and the degree of intermolecular association of poly[{2-(methacryloyloxy) ethyl} dimethyl-(3-sulfopropyl) ammonium hydroxide] (PMEDSAH), a previously reported synthetic substrate for feeder-free and xeno-free culture of human embryonic stem cells. DOE-based multifunctional predictions offer a powerful quantitative framework for designing polymer interfaces. For example, model predictions can be used to decrease the critical thickness at which the wettability transition occurs by simply increasing the catalyst quantity from 1 to 3 mol %.
Low order physical models of vertical axis wind turbines
NASA Astrophysics Data System (ADS)
Craig, Anna; Dabiri, John; Koseff, Jeffrey
2016-11-01
In order to examine the ability of low-order physical models of vertical axis wind turbines to accurately reproduce key flow characteristics, experiments were conducted on rotating turbine models, rotating solid cylinders, and stationary porous flat plates (of both uniform and non-uniform porosities). From examination of the patterns of mean flow, the wake turbulence spectra, and several quantitative metrics, it was concluded that the rotating cylinders represent a reasonably accurate analog for the rotating turbines. In contrast, from examination of the patterns of mean flow, it was found that the porous flat plates represent only a limited analog for rotating turbines (for the parameters examined). These findings have implications for both laboratory experiments and numerical simulations, which have previously used analogous low order models in order to reduce experimental/computational costs. NSF GRF and SGF to A.C; ONR N000141211047 and the Gordon and Betty Moore Foundation Grant GBMF2645 to J.D.; and the Bob and Norma Street Environmental Fluid Mechanics Laboratory at Stanford University.
Cultural competence in mental health care: a review of model evaluations
Bhui, Kamaldeep; Warfa, Nasir; Edonya, Patricia; McKenzie, Kwame; Bhugra, Dinesh
2007-01-01
Background Cultural competency is now a core requirement for mental health professionals working with culturally diverse patient groups. Cultural competency training may improve the quality of mental health care for ethnic groups. Methods A systematic review that included evaluated models of professional education or service delivery. Results Of 109 potential papers, only 9 included an evaluation of the model to improve the cultural competency practice and service delivery. All 9 studies were located in North America. Cultural competency included modification of clinical practice and organizational performance. Few studies published their teaching and learning methods. Only three studies used quantitative outcomes. One of these showed a change in attitudes and skills of staff following training. The cultural consultation model showed evidence of significant satisfaction by clinicians using the service. No studies investigated service user experiences and outcomes. Conclusion There is limited evidence on the effectiveness of cultural competency training and service delivery. Further work is required to evaluate improvement in service users' experiences and outcomes. PMID:17266765
Walker, María Rosa; Zúñiga, Denisse; Triviño, Ximena
2012-05-01
Narrative medicine has been shown to be a powerful instrument for reinforcing relationships, identity, and self-knowledge among health professionals. Subjective issues have recently been recognized as relevant for faculty development, in addition to the technical aspects. Since 2006, a creative writing workshop has been included as part of the Diploma in Medical Education at the medical school of the Pontificia Universidad Católica de Chile. To describe the experience and results of the creative writing workshop (2006-2010). Descriptive and retrospective study with a qualitative and quantitative design. Thirty-six teachers of the School of Medicine attended a 12-hour workshop. The Kirkpatrick model for evaluation of educational outcomes was used to report the data obtained in the course evaluation survey and in the stories produced. There were positive results at all four levels of the Kirkpatrick evaluation model. The learning objectives of the workshop were achieved, and 83 stories were created, compiled and published. The creative writing workshop can provide faculty with protected time for reflective practice about academic experiences and produce educational outcomes at different levels of the Kirkpatrick model.
Validation and Continued Development of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2016-10-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. An implementation of anisotropic viscosity, a feature observed to improve agreement between NIMROD simulations and experiment, will also be presented, along with investigations of flux conserver features and their impact on density control for future SIHI experiments. Work supported by DoE.
ANN-based calibration model of FTIR used in transformer online monitoring
NASA Astrophysics Data System (ADS)
Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong
2005-02-01
Recently, chromatography columns and gas sensors have been used in online monitoring devices for dissolved gases in transformer oil. But these devices still have some disadvantages: consumption of carrier gas, the need for calibration, etc. Since FTIR has high accuracy, consumes no carrier gas, and requires no calibration, the researchers studied the application of FTIR in such monitoring devices. "Flow gas method" experiments were designed, and spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key problem in the application of FTIR is that the absorbance spectra of the three key fault gases, C2H4, CH4 and C2H6, overlap seriously at 2700-3400 cm-1. Because the absorbance law is no longer appropriate, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak-height absorbances of C2H4, CH4 and C2H6 were adopted as quantitative features, and all data were normalized before training the ANN. Computed results show that the calibration model can effectively eliminate the cross-disturbance to the measurement.
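The calibration step can be sketched with a small back-propagation network mapping overlapped peak-height absorbances to gas concentrations. The synthetic mixing matrix below stands in for the real overlapped C2H4/CH4/C2H6 bands and is an assumption, not measured spectra.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Sketch of a BP-ANN calibration: overlapped absorbance features in,
# concentrations out. The mixing matrix and noise are illustrative.

n_train = 300
conc = rng.random((n_train, 3))              # C2H4, CH4, C2H6 fractions
overlap = np.array([[1.0, 0.4, 0.3],         # assumed band overlap matrix
                    [0.5, 1.0, 0.4],
                    [0.3, 0.5, 1.0]])
absorb = conc @ overlap
absorb *= 1 + 0.02 * rng.standard_normal(absorb.shape)   # measurement noise

# normalize features before training, as in the abstract
mean, std = absorb.mean(0), absorb.std(0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit((absorb - mean) / std, conc)

test = np.array([[0.2, 0.5, 0.3]]) @ overlap
print("predicted concentrations:",
      np.round(model.predict((test - mean) / std), 3))
```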
Think Pair Share with Formative Assessment for Junior High School Student
NASA Astrophysics Data System (ADS)
Pradana, O. R. Y.; Sujadi, I.; Pramudya, I.
2017-09-01
Geometry is a subject that demands abstract thinking, so many students struggle to understand this material well. The learning model therefore plays a crucial role in improving student achievement, and a poorly chosen learning model can cause difficulties for students. This study provides a quantitative evaluation of the Think Pair Share learning model combined with formative assessment, tested on junior high school students. The research uses a quantitative pretest-posttest design with a control group and an experiment group; ANOVA and Scheffe tests were used to analyse the effectiveness of the learning. The study finds that student achievement in geometry increased significantly under Think Pair Share with formative assessment, probably because this approach makes students more active during learning. We hope that in the future Think Pair Share with formative assessment will be a useful approach for teachers, and that it will be applied by teachers around the world, especially for geometry.
Topex/Poseidon: A United States/France mission. Oceanography from space: The oceans and climate
NASA Technical Reports Server (NTRS)
1992-01-01
The TOPEX/POSEIDON space mission, sponsored by NASA and France's space agency, the Centre National d'Etudes Spatiales (CNES), will give new observations of the Earth from space to gain a quantitative understanding of the role of ocean currents in climate change. Rising atmospheric concentrations of carbon dioxide and other 'greenhouse gases' produced as a result of human activities could generate a global warming, followed by an associated rise in sea level. The satellite will use radar altimetry to measure sea-surface height and will be tracked by three independent systems to yield accurate topographic maps over the dimensions of entire ocean basins. The satellite data, together with the Tropical Ocean and Global Atmosphere (TOGA) program and the World Ocean Circulation Experiment (WOCE) measurements, will be analyzed by an international scientific team. By merging the satellite observations with TOGA and WOCE findings, the scientists will establish the extensive data base needed for the quantitative description and computer modeling of ocean circulation. The ocean models will eventually be coupled with atmospheric models to lay the foundation for predictions of global climate change.
NASA Astrophysics Data System (ADS)
Saadatmand, S. N.; Bartlett, S. D.; McCulloch, I. P.
2018-04-01
Obtaining quantitative ground-state behavior for geometrically-frustrated quantum magnets with long-range interactions is challenging for numerical methods. Here, we demonstrate that the ground states of these systems on two-dimensional lattices can be efficiently obtained using state-of-the-art translation-invariant variants of matrix product states and density-matrix renormalization-group algorithms. We use these methods to calculate the fully-quantitative ground-state phase diagram of the long-range interacting triangular Ising model with a transverse field on six-leg infinite-length cylinders and scrutinize the properties of the detected phases. We compare these results with those of the corresponding nearest neighbor model. Our results suggest that, for such long-range Hamiltonians, the long-range quantum fluctuations always lead to long-range correlations, where correlators exhibit power-law decays instead of the conventional exponential drops observed for short-range correlated gapped phases. Our results are relevant for comparisons with recent ion-trap quantum simulator experiments that demonstrate highly-controllable long-range spin couplings for several hundred ions.
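The distinction drawn above between power-law and exponential correlator decay is easy to test numerically: fit both forms to a measured correlator and compare residuals. The synthetic correlator below is illustrative, not DMRG output.

```python
import numpy as np
from scipy.optimize import curve_fit

# Diagnostic sketch: fit C(r) with a power law and an exponential and
# compare sum-of-squares errors. Data are synthetic assumptions.

r = np.arange(1, 40, dtype=float)
true = 0.8 * r ** -1.5                  # assumed long-range-induced decay
data = true * (1 + 0.03 * np.random.default_rng(4).standard_normal(r.size))

power = lambda r, a, eta: a * r ** -eta
expo = lambda r, a, xi: a * np.exp(-r / xi)

p_pow, _ = curve_fit(power, r, data, p0=(1.0, 1.0))
p_exp, _ = curve_fit(expo, r, data, p0=(1.0, 5.0))

sse = lambda f, p: np.sum((f(r, *p) - data) ** 2)
print(f"power-law SSE: {sse(power, p_pow):.2e}  (eta = {p_pow[1]:.2f})")
print(f"exponential SSE: {sse(expo, p_exp):.2e}  (xi = {p_exp[1]:.2f})")
```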
Quantitative two-dimensional HSQC experiment for high magnetic field NMR spectrometers
NASA Astrophysics Data System (ADS)
Koskela, Harri; Heikkilä, Outi; Kilpeläinen, Ilkka; Heikkinen, Sami
2010-01-01
The finite RF power available on the carbon channel in proton-carbon correlation experiments leads to a non-uniform cross peak intensity response across the carbon chemical shift range. Several classes of broadband pulses are available that alleviate this problem. Adiabatic pulses provide an excellent magnetization inversion over a large bandwidth, and very recently, novel phase-modulated pulses have been proposed that perform 90° and 180° magnetization rotations with good offset tolerance. Here, we present a study of how these broadband pulses (adiabatic and phase-modulated) can improve quantitative application of the heteronuclear single quantum coherence (HSQC) experiment on high magnetic field strength NMR spectrometers. Theoretical and experimental examinations of the quantitative, offset-compensated, CPMG-adjusted HSQC (Q-OCCAHSQC) experiment are presented. The proposed experiment offers a substantial improvement in offset performance; the 13C offset-dependent standard deviation of the peak intensity was below 6% in a range of ±20 kHz. This covers the carbon chemical shift range of 150 ppm, which contains the protonated carbons excluding the aldehydes, for 22.3 T NMR magnets. A demonstration of the quantitative analysis of a fasting blood plasma sample obtained from a healthy volunteer is given.
Mechanistic analysis of challenge-response experiments.
Shotwell, M S; Drake, K J; Sidorov, V Y; Wikswo, J P
2013-09-01
We present an application of mechanistic modeling and nonlinear longitudinal regression in the context of biomedical response-to-challenge experiments, a field where these methods are underutilized. In this type of experiment, a system is studied by imposing an experimental challenge, and then observing its response. The combination of mechanistic modeling and nonlinear longitudinal regression has brought new insight, and revealed an unexpected opportunity for optimal design. Specifically, the mechanistic aspect of our approach enables the optimal design of experimental challenge characteristics (e.g., intensity, duration). This article lays some groundwork for this approach. We consider a series of experiments wherein an isolated rabbit heart is challenged with intermittent anoxia. The heart responds to the challenge onset, and recovers when the challenge ends. The mean response is modeled by a system of differential equations that describe a candidate mechanism for cardiac response to anoxia challenge. The cardiac system behaves more variably when challenged than when at rest. Hence, observations arising from this experiment exhibit complex heteroscedasticity and sharp changes in central tendency. We present evidence that an asymptotic statistical inference strategy may fail to adequately account for statistical uncertainty. Two alternative methods are critiqued qualitatively (i.e., for utility in the current context), and quantitatively using an innovative Monte-Carlo method. We conclude with a discussion of the exciting opportunities in optimal design of response-to-challenge experiments. © 2013, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
2017-11-01
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
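A minimal sketch of deviation measures with observation-based benchmarks follows: score each region's hindcast with RMSE and bias, and compare the RMSE against the observations' own variability. Treating the observational standard deviation as the pass/fail benchmark is our assumption of one such benchmark, not the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(5)

# Region-by-region hindcast evaluation: per-region RMSE and bias, with a
# benchmark drawn from the statistics of the observational dataset.
# Synthetic data; "region_2" is given a deliberate bias for illustration.

years = 10
obs = {f"region_{i}": 100 + 10 * rng.standard_normal(years) for i in range(4)}
mod = {k: v + rng.normal(0, 8, years) + (5 if k == "region_2" else 0)
       for k, v in obs.items()}

for region in obs:
    err = mod[region] - obs[region]
    rmse = np.sqrt(np.mean(err ** 2))
    bias = err.mean()
    benchmark = obs[region].std(ddof=1)   # observational variability
    verdict = "pass" if rmse < benchmark else "FAIL"
    print(f"{region}: RMSE={rmse:5.2f}  bias={bias:+5.2f}"
          f"  benchmark={benchmark:5.2f}  {verdict}")
```

Evaluating per region rather than globally is the point: a model can look skillful in the global aggregate while failing several regional benchmarks, which is exactly the masking effect the paper warns about.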
NASA Astrophysics Data System (ADS)
Russell, Greg
The work described in this dissertation was motivated by a desire to better understand the cellular pathology of ischemic stroke. Two of the three bodies of research presented herein address an issue directly related to the investigation of ischemic stroke through the use of diffusion weighted magnetic resonance imaging (DWMRI) methods. The first topic concerns the development of a computationally efficient finite difference method, designed to evaluate the impact of microscopic tissue properties on the formation of DWMRI signal. For the second body of work, the effect of changing the intrinsic diffusion coefficient of a restricted sample on clinical DWMRI experiments is explored. The final body of work, while motivated by the desire to understand stroke, addresses the issue of acquiring large amounts of MRI data well suited for quantitative analysis in reduced scan time. In theory, the method could be used to generate quantitative parametric maps, including those depicting information gleaned through the use of DWMRI methods. Chapter 1 provides an introduction to several topics. A description of the use of DWMRI methods in the study of ischemic stroke is covered. An introduction to the fundamental physical principles at work in MRI is also provided. In this section the means by which magnetization is created in MRI experiments, how MRI signal is induced, as well as the influence of spin-spin and spin-lattice relaxation are discussed. Attention is also given to describing how MRI measurements can be sensitized to diffusion through the use of qualitative and quantitative descriptions of the process. Finally, the reader is given a brief introduction to the use of numerical methods for solving partial differential equations. In Chapters 2, 3 and 4, three related bodies of research are presented in terms of research papers. In Chapter 2, a novel computational method is described. The method reduces the computation resources required to simulate DWMRI experiments. In Chapter 3, a detailed study on how changes in the intrinsic intracellular diffusion coefficient may influence clinical DWMRI experiments is described. In Chapter 4, a novel, non-steady state quantitative MRI method is described.
Hurricanes and Climate: The U.S. CLIVAR Working Group on Hurricanes
Walsh, Kevin J. E.; Camargo, Suzana J.; Vecchi, Gabriel A.; ...
2015-06-01
While a quantitative climate theory of tropical cyclone formation remains elusive, considerable progress has been made recently in our ability to simulate tropical cyclone climatologies and to understand the relationship between climate and tropical cyclone formation. Climate models are now able to simulate a realistic rate of global tropical cyclone formation, although simulation of the Atlantic tropical cyclone climatology remains challenging unless horizontal resolutions finer than 50 km are employed. This article summarizes published research from the idealized experiments of the Hurricane Working Group of U.S. Climate and Ocean: Variability, Predictability and Change (CLIVAR). This work, combined with results from other model simulations, has strengthened relationships between tropical cyclone formation rates and climate variables such as midtropospheric vertical velocity, with decreased climatological vertical velocities leading to decreased tropical cyclone formation. Systematic differences are shown between experiments in which only sea surface temperature is increased compared with experiments where only atmospheric carbon dioxide is increased. Experiments where only carbon dioxide is increased are more likely to demonstrate a decrease in tropical cyclone numbers, similar to the decreases simulated by many climate models for a future, warmer climate. Experiments where the two effects are combined also show decreases in numbers, but these tend to be less for models that demonstrate a strong tropical cyclone response to increased sea surface temperatures. Lastly, further experiments are proposed that may improve our understanding of the relationship between climate and tropical cyclone formation, including experiments with two-way interaction between the ocean and the atmosphere and variations in atmospheric aerosols.
Mechanistic Understanding of Microbial Plugging for Improved Sweep Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steven Bryant; Larry Britton
2008-09-30
Microbial plugging has been proposed as an effective low-cost method of permeability reduction. Yet there is a dearth of information on the fundamental processes of microbial growth in porous media, and there are no suitable data to model the process of microbial plugging as it relates to sweep efficiency. To optimize the field implementation, better mechanistic and volumetric understanding of biofilm growth within a porous medium is needed. In particular, the engineering design hinges upon a quantitative relationship between amount of nutrient consumption, amount of growth, and degree of permeability reduction. In this project experiments were conducted to obtain new data to elucidate this relationship. Experiments in heterogeneous (layered) bead packs showed that microbes could grow preferentially in the high permeability layer. Ultimately this caused flow to be equally divided between high and low permeability layers, precisely the behavior needed for MEOR. Remarkably, classical models of microbial nutrient uptake in batch experiments do not explain the nutrient consumption by the same microbes in flow experiments. We propose a simple extension of classical kinetics to account for the self-limiting consumption of nutrient observed in our experiments, and we outline a modeling approach based on architecture and behavior of biofilms. Such a model would account for the changing trend of nutrient consumption by bacteria with the increasing biomass and the onset of biofilm formation. However no existing model can explain the microbial preference for growth in high permeability regions, nor is there any obvious extension of the model for this observation. An attractive conjecture is that quorum sensing is involved in the heterogeneous bead packs.
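Since the report leaves the extended kinetics unspecified, a minimal sketch can illustrate the idea of self-limiting nutrient uptake. The snippet below couples classical Monod consumption to a hypothetical self-limiting factor (1 - X/X_max); the limiting term and all parameter values are illustrative assumptions, not the project's calibrated model.

```python
# Hypothetical sketch: Monod nutrient uptake with a self-limiting growth factor.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, Y, X_max = 0.5, 0.2, 0.4, 5.0  # illustrative parameters

def monod_self_limiting(t, y):
    S, X = y  # nutrient and biomass concentrations
    growth = mu_max * S / (K_s + S) * X * (1.0 - X / X_max)  # assumed limiting term
    return [-growth / Y, growth]

sol = solve_ivp(monod_self_limiting, (0.0, 48.0), [10.0, 0.1])
print(sol.y[:, -1])  # final nutrient and biomass levels
```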
Modelling the Active Hearing Process in Mosquitoes
NASA Astrophysics Data System (ADS)
Avitabile, Daniele; Homer, Martin; Jackson, Joe; Robert, Daniel; Champneys, Alan
2011-11-01
A simple microscopic mechanistic model is described of the active amplification within the Johnston's organ of the mosquito species Toxorhynchites brevipalpis. The model is based on the description of the antenna as a forced-damped oscillator coupled to a set of active threads (ensembles of scolopidia) that provide an impulsive force when they twitch. This twitching is in turn controlled by channels that are opened and closed if the antennal oscillation reaches a critical amplitude. The model matches recent experiments both qualitatively and quantitatively. New results are presented using mathematical homogenization techniques to derive a mesoscopic model as a simple oscillator with nonlinear force and damping characteristics. It is shown how the results from this new model closely resemble those from the microscopic model as the number of threads approaches physiologically correct values.
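A toy version of the microscopic picture can be written in a few lines. The sketch below integrates a forced-damped oscillator and adds an impulsive "twitch" force whenever the displacement exceeds a critical amplitude; the twitch rule, its direction (injecting energy along the velocity), and every parameter are invented for illustration and are not the paper's calibrated values.

```python
# Illustrative forced-damped oscillator with threshold-triggered twitch forcing.
import numpy as np

dt, n_steps = 1e-4, 200_000
m, c, k = 1.0, 0.8, 400.0      # mass, damping, stiffness (arbitrary units)
F0, w = 1.0, 20.0              # drive amplitude and angular frequency
amp_crit, kick = 0.02, 5.0     # assumed twitch threshold and impulse strength

x, v = 0.0, 0.0
for i in range(n_steps):
    t = i * dt
    twitch = kick * np.sign(v) if abs(x) > amp_crit else 0.0  # active energy injection
    a = (F0 * np.cos(w * t) + twitch - c * v - k * x) / m
    v += a * dt
    x += v * dt
print(f"displacement after {n_steps * dt:.1f} time units: {x:.4f}")
```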
Taraji, Maryam; Haddad, Paul R; Amos, Ruth I J; Talebi, Mohammad; Szucs, Roman; Dolan, John W; Pohl, Chris A
2017-02-07
A design-of-experiment (DoE) model was developed, able to describe the retention times of a mixture of pharmaceutical compounds in hydrophilic interaction liquid chromatography (HILIC) under all possible combinations of acetonitrile content, salt concentration, and mobile-phase pH with R² > 0.95. Further, a quantitative structure-retention relationship (QSRR) model was developed to predict retention times for new analytes, based only on their chemical structures, with a root-mean-square error of prediction (RMSEP) as low as 0.81%. A compound classification based on the concept of similarity was applied prior to QSRR modeling. Finally, we utilized a combined QSRR-DoE approach to propose an optimal design space in a quality-by-design (QbD) workflow to facilitate HILIC method development. The mathematical QSRR-DoE model was shown to be highly predictive when applied to an independent test set of unseen compounds in unseen conditions, with an RMSEP value of 5.83%. The QSRR-DoE-computed retention times of pharmaceutical test analytes, and the subsequently calculated separation selectivity, were used to optimize the chromatographic conditions for efficient separation of targets. A Monte Carlo simulation was performed to evaluate the risk of uncertainty in the model's prediction, and to define the design space where the desired quality criterion was met. Experimental realization of peak selectivity between targets under the selected optimal working conditions confirmed the theoretical predictions. These results demonstrate how discovery of optimal conditions for the separation of new analytes can be accelerated by the use of appropriate theoretical tools.
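For reference, the RMSEP metric quoted above is straightforward to compute; the sketch below uses hypothetical retention times, and the percentage form reported in the abstract presumably normalizes the error (e.g. by the mean observed retention time), which is an assumption here.

```python
# RMSEP on hypothetical observed vs. predicted retention times.
import numpy as np

def rmsep(y_obs, y_pred):
    return np.sqrt(np.mean((np.asarray(y_obs) - np.asarray(y_pred)) ** 2))

y_obs = np.array([5.2, 7.9, 11.4, 14.8])   # observed retention times (min)
y_pred = np.array([5.3, 7.7, 11.6, 14.5])  # model predictions (min)
print(rmsep(y_obs, y_pred), 100 * rmsep(y_obs, y_pred) / y_obs.mean())
```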
Logic integer programming models for signaling networks.
Haus, Utz-Uwe; Niermann, Kathrin; Truemper, Klaus; Weismantel, Robert
2009-05-01
We propose a static and a dynamic approach to model biological signaling networks, and show how each can be used to answer relevant biological questions. For this, we use the two different mathematical tools of Propositional Logic and Integer Programming. The power of discrete mathematics for handling qualitative as well as quantitative data has so far not been exploited in molecular biology, which is mostly driven by experimental research, relying on first-order or statistical models. The arising logic statements and integer programs are analyzed and can be solved with standard software. For a restricted class of problems the logic models reduce to a polynomial-time solvable satisfiability algorithm. Additionally, a more dynamic model enables enumeration of possible time resolutions in poly-logarithmic time. Computational experiments are included.
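To give a flavor of the logic-based formulation, the toy sketch below enumerates the Boolean states of an invented three-node signaling network consistent with simple activation/inhibition rules; real instances would be posed as propositional logic statements or integer programs and handed to standard solvers, as the abstract describes.

```python
# Toy Boolean signaling network (invented): A activates B, B inhibits C.
from itertools import product

def consistent(a, b, c):
    return b == a and c == (not b)  # B follows A; C is active iff B is inactive

states = [(a, b, c) for a, b, c in product([False, True], repeat=3) if consistent(a, b, c)]
print(states)  # the network's admissible states
```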
A Collaborative Molecular Modeling Environment Using a Virtual Tunneling Service
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative research on three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be utilized to overcome the problem caused by the temporal and spatial differences. However, traditional approaches did not sufficiently consider integration of different computing environments, which were characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment to integrate different molecule modeling systems using a virtual tunneling service. We integrated Co-Coot, which is a collaborative crystallographic object-oriented toolkit, with VRMMS, which is a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments. PMID:22927721
Klein, Hans-Ulrich; Ruckert, Christian; Kohlmann, Alexander; Bullinger, Lars; Thiede, Christian; Haferlach, Torsten; Dugas, Martin
2009-12-15
Multiple gene expression signatures derived from microarray experiments have been published in the field of leukemia research. A comparison of these signatures with results from new experiments is useful for verification as well as for interpretation of the results obtained. Currently, the percentage of overlapping genes is frequently used to compare published gene signatures against a signature derived from a new experiment. However, it has been shown that the percentage of overlapping genes is of limited use for comparing two experiments due to the variability of gene signatures caused by different array platforms or assay-specific influencing parameters. Here, we present a robust approach for a systematic and quantitative comparison of published gene expression signatures with an exemplary query dataset. A database storing 138 leukemia-related published gene signatures was designed. Each gene signature was manually annotated with terms according to a leukemia-specific taxonomy. Two analysis steps are implemented to compare a new microarray dataset with the results from previous experiments stored and curated in the database. First, the global test method is applied to assess gene signatures and to constitute a ranking among them. In a subsequent analysis step, the focus is shifted from single gene signatures to chromosomal aberrations or molecular mutations as modeled in the taxonomy. Potentially interesting disease characteristics are detected based on the ranking of gene signatures associated with these aberrations stored in the database. Two example analyses are presented. An implementation of the approach is freely available as a web-based application. The presented approach helps researchers to systematically integrate the knowledge derived from numerous microarray experiments into the analysis of a new dataset. By means of example leukemia datasets we demonstrate that this approach detects related experiments as well as related molecular mutations and may help to interpret new microarray data.
Ion Counting from Explicit-Solvent Simulations and 3D-RISM
Giambaşu, George M.; Luchko, Tyler; Herschlag, Daniel; York, Darrin M.; Case, David A.
2014-01-01
The ionic atmosphere around nucleic acids remains only partially understood at atomic-level detail. Ion counting (IC) experiments provide a quantitative measure of the ionic atmosphere around nucleic acids and, as such, are a natural route for testing quantitative theoretical approaches. In this article, we replicate IC experiments involving duplex DNA in NaCl(aq) using molecular dynamics (MD) simulation, the three-dimensional reference interaction site model (3D-RISM), and nonlinear Poisson-Boltzmann (NLPB) calculations and test against recent buffer-equilibration atomic emission spectroscopy measurements. Further, we outline the statistical mechanical basis for interpreting IC experiments and clarify the use of specific concentration scales. Near physiological concentrations, MD simulation and 3D-RISM estimates are close to experimental results, but at higher concentrations (>0.7 M), both methods underestimate the number of condensed cations and overestimate the number of excluded anions. The effect of DNA charge on ion and water atmosphere extends 20–25 Å from its surface, yielding layered density profiles. Overall, ion distributions from 3D-RISMs are relatively close to those from corresponding MD simulations, but with less Na+ binding in grooves and tighter binding to phosphates. NLPB calculations, on the other hand, systematically underestimate the number of condensed cations at almost all concentrations and yield nearly structureless ion distributions that are qualitatively distinct from those generated by both MD simulation and 3D-RISM. These results suggest that MD simulation and 3D-RISM may be further developed to provide quantitative insight into the characterization of the ion atmosphere around nucleic acids and their effect on structure and stability. PMID:24559991
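A compact way to state what ion counting measures (the notation below is a common convention in this literature, assumed here rather than quoted from the article) is the preferential interaction coefficient: the excess number of cations or anions in a volume around the nucleic acid relative to the bulk expectation.

```latex
\Gamma_{\pm} \;=\; \lim_{V \to \infty}\Big( \langle N_{\pm} \rangle_{V} \;-\; c_{\pm}^{\mathrm{bulk}}\, V \Big)
```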
A comparison of quantitative methods for clinical imaging with hyperpolarized (13)C-pyruvate.
Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A
2016-04-01
Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized (13)C-labelled molecules, such as the conversion of [1-(13)C]pyruvate to [1-(13)C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized (13)C data. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
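The "two-way kinetic model" favored in this comparison is commonly written as a pair of coupled rate equations for the pyruvate and lactate signals; the form below, with exchange rates k_PL and k_LP, relaxation rates R_1P and R_1L, and an inflow term u(t) (a Heaviside step in the in vivo fits), is a standard formulation assumed here rather than copied from the paper.

```latex
\frac{dP}{dt} = -\,(k_{PL} + R_{1P})\,P \;+\; k_{LP}\,L \;+\; u(t), \qquad
\frac{dL}{dt} = \;k_{PL}\,P \;-\; (k_{LP} + R_{1L})\,L
```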
Total protein analysis as a reliable loading control for quantitative fluorescent Western blotting.
Eaton, Samantha L; Roche, Sarah L; Llavero Hurtado, Maica; Oldknow, Karla J; Farquharson, Colin; Gillingwater, Thomas H; Wishart, Thomas M
2013-01-01
Western blotting has been a key technique for determining the relative expression of proteins within complex biological samples since the first publications in 1979. Recent developments in sensitive fluorescent labels, with truly quantifiable linear ranges and greater limits of detection, have allowed biologists to probe tissue-specific pathways and processes with higher resolution than ever before. However, the application of quantitative Western blotting (QWB) to a range of healthy tissues and those from degenerative models has highlighted a problem with significant consequences for quantitative protein analysis: how can researchers conduct comparative expression analyses when many of the commonly used reference proteins (e.g. loading controls) are differentially expressed? Here we demonstrate that common controls, including actin and tubulin, are differentially expressed in tissues from a wide range of animal models of neurodegeneration. We highlight the prevalence of such alterations through examination of published "-omics" data, and demonstrate similar responses in sensitive QWB experiments. For example, QWB analysis of spinal cord from a murine model of Spinal Muscular Atrophy using an Odyssey scanner revealed that β-actin expression was decreased by 19.3±2% compared to healthy littermate controls. Thus, normalising QWB data to β-actin in these circumstances could result in 'skewing' of all data by ∼20%. We further demonstrate that differential expression of commonly used loading controls was not restricted to the nervous system, but was also detectable across multiple tissues, including bone, fat and internal organs. Moreover, expression of these "control" proteins was not consistent between different portions of the same tissue, highlighting the importance of careful and consistent tissue sampling for QWB experiments. Finally, having illustrated the problem of selecting appropriate single protein loading controls, we demonstrate that normalisation using total protein analysis on samples run in parallel with stains such as Coomassie blue provides a more robust approach.
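The arithmetic of total-protein normalisation is simple; the sketch below divides each lane's target band intensity by that lane's total-protein stain signal and expresses the result relative to a reference lane. All intensity values are hypothetical.

```python
# Total-protein normalisation for QWB with hypothetical band intensities.
target = [1250.0, 980.0, 1410.0]              # target band intensity per lane
total_protein = [52000.0, 41000.0, 60000.0]   # Coomassie total signal per lane

normalized = [t / tp for t, tp in zip(target, total_protein)]
relative = [n / normalized[0] for n in normalized]  # fold change vs. lane 1
print([round(r, 3) for r in relative])
```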
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel
2014-01-15
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
THE POSITIVITY OFFSET THEORY OF ANHEDONIA IN SCHIZOPHRENIA.
Strauss, Gregory P; Frost, Katherine H; Lee, Bern G; Gold, James M
2017-03-01
Prior studies have concluded that schizophrenia patients are not anhedonic because they do not report reduced experience of positive emotion to pleasant stimuli. The current study challenged this view by applying quantitative methods validated in the Evaluative Space Model of emotional experience to test the hypothesis that schizophrenia patients evidence a reduction in the normative "positivity offset" (i.e., the tendency to experience higher levels of positive than negative emotional output when stimulus input is absent or weak). Participants included 76 schizophrenia patients and 60 healthy controls who completed an emotional experience task that required reporting the level of positive emotion, negative emotion, and arousal to photographs. Results indicated that although schizophrenia patients evidenced intact capacity to experience positive emotion at high levels of stimulus input, they displayed a diminished positivity offset. Reductions in the positivity offset may underlie volitional disturbance, limiting approach behaviors toward novel stimuli in neutral environments.
Knoll, Florian; Raya, José G; Halloran, Rafael O; Baete, Steven; Sigmund, Eric; Bammer, Roland; Block, Tobias; Otazo, Ricardo; Sodickson, Daniel K
2015-01-01
Radial spin echo diffusion imaging allows motion-robust imaging of tissues with very low T2 values, like articular cartilage, with high spatial resolution and signal-to-noise ratio (SNR). However, in vivo measurements are challenging due to the significantly slower data acquisition speed of spin-echo sequences and the less efficient k-space coverage of radial sampling, which raises the demand for accelerated protocols by means of undersampling. This work introduces a new reconstruction approach for undersampled DTI. A model-based reconstruction implicitly exploits redundancies in the diffusion weighted images by reducing the number of unknowns in the optimization problem, and compressed sensing is performed directly in the target quantitative domain by imposing a Total Variation (TV) constraint on the elements of the diffusion tensor. Experiments were performed for an anisotropic phantom and the knee and brain of healthy volunteers (3 and 2 volunteers, respectively). Evaluation of the new approach was conducted by comparing the results to reconstructions performed with gridding, combined parallel imaging and compressed sensing, and a recently proposed model-based approach. The experiments demonstrated improvement in terms of reduction of noise and streaking artifacts in the quantitative parameter maps, as well as a reduction of angular dispersion of the primary eigenvector when using the proposed method, without introducing systematic errors into the maps. This may enable a substantial reduction of the acquisition time in radial spin echo diffusion tensor imaging without degrading parameter quantification and/or SNR. PMID:25594167
A unified model of density limit in fusion plasmas
NASA Astrophysics Data System (ADS)
Zanca, P.; Sattin, F.; Escande, D. F.; Pucella, G.; Tudisco, O.
2017-05-01
In this work we identify, by analytical and numerical means, the conditions for the existence of a magnetic and thermal equilibrium of a cylindrical plasma in the presence of Ohmic and/or additional power sources, heat conduction, and radiation losses by light impurities. The boundary defining the space of solutions with a realistic temperature profile and a small edge value mathematically takes the form of a density limit (DL). Compared with previous similar analyses, the present work benefits from dealing with a more accurate set of equations. This refinement is elementary but decisive, since it discloses a tenuous dependence of the DL on thermal transport for configurations with an applied electric field. Thanks to this property, the DL scaling law is recovered almost identically for two largely different devices, the ohmic tokamak and the reversed field pinch. In particular, they share a Greenwald scaling, depending linearly on the plasma current, quantitatively consistent with experimental results. In the tokamak case the DL dependence on any additional heating approximately follows a 0.5 power law, which is compatible with L-mode experiments. For a purely externally heated configuration, taken as a cylindrical approximation of the stellarator, the DL dependence on transport is found to be stronger. By adopting suitable transport models, the DL takes on a Sudo-like form, in fair agreement with LHD experiments. Overall, the model provides a good zeroth-order quantitative description of the DL, applicable to widely different configurations.
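For orientation, the Greenwald scaling referred to above is conventionally written as below, with the limit density n_G in 10^20 m^-3, plasma current I_p in MA, and minor radius a in m; quoting this standard form is an addition here, not a formula reproduced from the paper.

```latex
n_{G} \;=\; \frac{I_{p}}{\pi a^{2}}
```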
Revealing interaction mode between HIV-1 protease and mannitol analog inhibitor.
Yan, Guan-Wen; Chen, Yue; Li, Yixue; Chen, Hai-Feng
2012-06-01
HIV-1 protease plays a key role in the HIV-1 replication cycle, controlling the maturation of HIV into infectious virions, and has become an important target for anti-HIV-1 drug development. Here, we used molecular dynamics simulation to study the binding mode between mannitol derivatives and HIV-1 protease. The results suggest that the most active compound (M35) has more stable hydrogen bonds and more stable native contacts than the less active one (M17). These mannitol derivatives might share a similar interaction mode with HIV-1 protease. Then, 3D-QSAR was used to construct quantitative structure-activity models. The cross-validated q(2) values are 0.728 and 0.611 for CoMFA and CoMSIA, respectively, and the non-cross-validated r(2) values are 0.973 and 0.950. Nine test-set compounds validate the model. The results show that this model possesses better prediction ability than the previous work. This model can be used to design new chemical entities and to make quantitative predictions of the bioactivities of HIV-1 protease inhibitors before resorting to in vitro and in vivo experiments. © 2012 John Wiley & Sons A/S.
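The cross-validated q² quoted for the CoMFA/CoMSIA models is conventionally defined as one minus the ratio of the predictive residual sum of squares to the total sum of squares, where ŷ_i^(cv) denotes the leave-one-out prediction for compound i; the formula below is the standard definition, stated here for reference.

```latex
q^{2} \;=\; 1 \;-\; \frac{\sum_{i}\big(y_{i} - \hat{y}_{i}^{\,(\mathrm{cv})}\big)^{2}}{\sum_{i}\big(y_{i} - \bar{y}\big)^{2}}
```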
Quantitative theory of driven nonlinear brain dynamics.
Roberts, J A; Robinson, P A
2012-09-01
Strong periodic stimuli such as bright flashing lights evoke nonlinear responses in the brain and interact nonlinearly with ongoing cortical activity, but the underlying mechanisms for these phenomena are poorly understood at present. The dominant features of these experimentally observed dynamics are reproduced by the dynamics of a quantitative neural field model subject to periodic drive. Model power spectra over a range of drive frequencies show agreement with multiple features of experimental measurements, exhibiting nonlinear effects including entrainment over a range of frequencies around the natural alpha frequency f(α), subharmonic entrainment near 2f(α), and harmonic generation. Further analysis of the driven dynamics as a function of the drive parameters reveals rich nonlinear dynamics that is predicted to be observable in future experiments at high drive amplitude, including period doubling, bistable phase-locking, hysteresis, wave mixing, and chaos indicated by positive Lyapunov exponents. Moreover, photosensitive seizures are predicted for physiologically realistic model parameters yielding bistability between healthy and seizure dynamics. These results demonstrate the applicability of neural field models to the new regime of periodically driven nonlinear dynamics, enabling interpretation of experimental data in terms of specific generating mechanisms and providing new tests of the theory. Copyright © 2012 Elsevier Inc. All rights reserved.
Mechanisms, functions and ecology of colour vision in the honeybee.
Hempel de Ibarra, N; Vorobyev, M; Menzel, R
2014-06-01
Research in the honeybee has laid the foundations for our understanding of insect colour vision. The trichromatic colour vision of honeybees shares fundamental properties with primate and human colour perception, such as colour constancy, colour opponency, segregation of colour and brightness coding. Laborious efforts to reconstruct the colour vision pathway in the honeybee have provided detailed descriptions of neural connectivity and the properties of photoreceptors and interneurons in the optic lobes of the bee brain. The modelling of colour perception advanced with the establishment of colour discrimination models that were based on experimental data, the Colour-Opponent Coding and Receptor Noise-Limited models, which are important tools for the quantitative assessment of bee colour vision and colour-guided behaviours. Major insights into the visual ecology of bees have been gained combining behavioural experiments and quantitative modelling, and asking how bee vision has influenced the evolution of flower colours and patterns. Recently research has focussed on the discrimination and categorisation of coloured patterns, colourful scenes and various other groupings of coloured stimuli, highlighting the bees' behavioural flexibility. The identification of perceptual mechanisms remains of fundamental importance for the interpretation of their learning strategies and performance in diverse experimental tasks.
The Upper Atmosphere Research Satellite: From Coffee Table Art to Quantitative Science
NASA Technical Reports Server (NTRS)
Douglass, Anne R.
1999-01-01
The Upper Atmosphere Research Satellite (UARS) has provided an unprecedented set of observations of constituents of the stratosphere. When used in combination with data from other sources and appropriate modeling tools, these observations are useful for quantitative evaluation of stratospheric photochemical processes. This is illustrated by comparing ozone observations from airborne Differential Absorption Lidar (DIAL), from the Polar Ozone and Aerosol Measurement (POAM), from the Microwave Limb Sounder (MLS), and from the Halogen Occultation Experiment (HALOE) with ozone fields generated with a three-dimensional model. For 1995-96, at polar latitudes, observations from DIAL flights on December 9 and January 30, and POAM and MLS between late December and late January, are compared with ozone fields from the GSFC 3D chemistry and transport model. Data from the three platforms consistently show that the observed ozone has a negative trend relative to the modeled ozone, and that the trend is uniform in time between early and mid winter, with no obvious dependence on proximity to the vortex edge. The importance of chlorine catalyzed photochemistry to this ozone loss is explored by comparing observations from MLS and HALOE with simulations for other northern winters, particularly 1997-98.
Modeling the Afferent Dynamics of the Baroreflex Control System
Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.
2013-01-01
In this study we develop a modeling framework for predicting baroreceptor firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve-endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model uses circumferential strain as an input, predicting receptor deformation as an output. Finally, the neural model takes receptor deformation as an input, predicting the BR firing rate as an output. Our results show that nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the artery wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework in combination with sensitivity analysis and parameter estimation can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
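The integrate-and-fire ingredient the authors found necessary can be illustrated with a minimal leaky integrate-and-fire sketch: firing stops once the stimulus drops below threshold, mimicking the cessation seen after a square pressure step. The parameters and stimulus profile below are illustrative, not the fitted model.

```python
# Minimal leaky integrate-and-fire neuron driven by a square stimulus.
import numpy as np

dt, T = 1e-3, 2.0
tau, v_threshold, v_reset = 0.02, 1.0, 0.0
t_axis = np.arange(0.0, T, dt)
stimulus = np.where((t_axis > 0.5) & (t_axis < 1.5), 2.0, 0.2)  # square step

v, spike_times = 0.0, []
for t, drive in zip(t_axis, stimulus):
    v += dt * (-v + drive) / tau       # leaky integration
    if v >= v_threshold:               # fire and reset
        spike_times.append(t)
        v = v_reset
print(f"{len(spike_times)} spikes, all within the stimulus window")
```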
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya
2017-12-01
Quantitative photoacoustic tomography (QPAT) employing a light propagation model will play an important role in medical diagnoses by quantifying the concentration of hemoglobin or a contrast agent. However, QPAT using a light propagation model based on the three-dimensional (3D) radiative transfer equation (RTE) requires a huge computational load in the iterative forward calculations involved in the updating process to reconstruct the absorption coefficient. Approximations of the light propagation improve the efficiency of the image reconstruction for QPAT. In this study, we compared the 3D/two-dimensional (2D) photon diffusion equation (PDE) approximating the 3D RTE with a Monte Carlo simulation based on the 3D RTE. Then, the errors in a 2D PDE-based linearized image reconstruction caused by the approximations were quantitatively demonstrated and discussed in numerical simulations. It was clearly observed that the approximations affected the reconstructed absorption coefficient. The 2D PDE-based linearized algorithm succeeded in the image reconstruction of the region with a large absorption coefficient in the 3D phantom. The value reconstructed in the phantom experiment agreed with that in the numerical simulation, validating that the numerical simulation of the image reconstruction predicts the relationship between the true absorption coefficient of the target in the 3D medium and the value reconstructed with the 2D PDE-based linearized algorithm. Moreover, the true absorption coefficient in the 3D medium was estimated from the 2D reconstructed image on the basis of the prediction by the numerical simulation. The estimation was successful in the phantom experiment, although some limitations were revealed.
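For context, the photon diffusion equation that approximates the RTE is commonly written in its continuous-wave form as below, with fluence Φ, absorption coefficient μ_a, reduced scattering coefficient μ_s′, and source q; whether the study used exactly this continuous-wave form rather than a time- or frequency-domain variant is an assumption.

```latex
-\nabla \cdot \big( D(\mathbf{r})\, \nabla \Phi(\mathbf{r}) \big) \;+\; \mu_{a}(\mathbf{r})\,\Phi(\mathbf{r}) \;=\; q(\mathbf{r}),
\qquad D = \frac{1}{3\,(\mu_{a} + \mu_{s}')}
```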
3D analysis of bone formation around titanium implants using micro-computed tomography (μCT)
NASA Astrophysics Data System (ADS)
Bernhardt, Ricardo; Scharnweber, Dieter; Müller, Bert; Beckmann, Felix; Goebbels, Jürgen; Jansen, John; Schliephake, Henning; Worch, Hartmut
2006-08-01
The quantitative analysis of bone formation around biofunctionalised metallic implants is an important tool for the further development of implants with higher success rates. This is, nowadays, especially important in cases of additional diseases like diabetes or osteoporosis. Micro computed tomography (μCT), as non-destructive technique, offers the possibility for quantitative three-dimensional recording of bone close to the implant's surface with micrometer resolution, which is the range of the relevant bony structures. Within different animal models using cylindrical and screw-shaped Ti6Al4V implants we have compared visualization and quantitative analysis of newly formed bone by the use of synchrotron-radiation-based CT-systems in comparison with histological findings. The SRμCT experiments were performed at the beamline BW 5 (HASYLAB at DESY, Hamburg, Germany) and at the BAMline (BESSY, Berlin, Germany). For the experiments, PMMA-embedded samples were prepared with diameters of about 8 mm, which contain in the center the implant surrounded by the bony tissue. To (locally) quantify the bone formation, models were developed and optimized. The comparison of the results obtained by SRμCT and histology demonstrates the advantages and disadvantages of both approaches, although the bone formation values for the different biofunctionalized implants are identical within the error bars. SRμCT allows the clear identification of fully mineralized bone around the different titanium implants. As hundreds of virtual slices were easily generated for the individual samples, the quantification and interactive bone detection led to conclusions of high precision and statistical relevance. In this way, SRμCT in combination with interactive data analysis is proven to be more significant with respect to classical histology.
The experiences of relatives with the practice of palliative sedation: a systematic review.
Bruinsma, Sophie M; Rietjens, Judith A C; Seymour, Jane E; Anquinet, Livia; van der Heide, Agnes
2012-09-01
Guidelines about palliative sedation typically include recommendations to protect the well-being of relatives. The aim of this study was to systematically review evidence on the experiences of relatives with the practice of palliative sedation. PubMed, Embase, Web of Science, PsycINFO, and CINAHL were searched for empirical studies on relatives' experiences with palliative sedation. We investigated relatives' involvement in the decision-making and sedation processes, whether they received adequate information and support, and relatives' emotions. Of the 564 studies identified, 39 were included. The studies (30 quantitative, six qualitative, and three mixed methods) were conducted in 16 countries; three studies were based on relatives' reports, 26 on physicians' and nurses' proxy reports, seven on medical records, and three combined different sources. The 39 studies yielded a combined total of 8791 respondents or studied cases. Caregivers involved relatives in the decision making in 69%-100% of all cases (19 quantitative studies), and in 60%-100% of all cases, relatives were reported to have received adequate information (five quantitative studies). Only two quantitative studies reported on relatives' involvement in the provision of sedation. Despite the fact that the majority of relatives were reported to be comfortable with the use of palliative sedation (seven quantitative studies, four qualitative studies), several studies found that relatives were distressed by the use of sedation (five quantitative studies, five qualitative studies). No studies reported specifically about the support provided to the relatives. Relatives' experiences with palliative sedation are mainly studied from the perspective of proxies, mostly professional caregivers. The majority of relatives seems to be comfortable with the use of palliative sedation; however, they may experience substantial distress by the use of sedation. Copyright © 2012 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Dark Field Microscopy for Analytical Laboratory Courses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augspurger, Ashley E; Stender, Anthony S; Marchuk, Kyle
2014-06-10
An innovative and inexpensive optical microscopy experiment for a quantitative analysis or an instrumental analysis chemistry course is described. The students have hands-on experience with a dark field microscope and investigate the wavelength dependence of localized surface plasmon resonance in gold and silver nanoparticles. Students also observe and measure individual crystal growth during a replacement reaction between copper and silver nitrate. The experiment allows for quantitative, qualitative, and image data analyses for undergraduate students.
Choice of Intravenous Agents and Intubation Neuromuscular Blockers by Anesthesia Providers
1996-09-01
…of this study to determine if experience of the provider made a difference in the agent chosen. Both quantitative and qualitative methods were… comparison of quantitative and qualitative data of induction and intubation agents collected from CRNAs and MDAs according to experience of both types of… providers was analyzed to provide meaningful data. The difference in choice of agents by experience was found not to be significant.
Q and you: The application of Q methodology in recreation research
Whitney Ward
2010-01-01
Researchers have used various qualitative and quantitative methods to deal with subjectivity in studying people's recreation experiences. Q methodology has been the most effective approach for analyzing both qualitative and quantitative aspects of experience, including attitudes or perceptions. The method is composed of two main components--Q sorting and Q factor...
The Positive Alternative Credit Experience (PACE) Program a Quantitative Comparative Study
ERIC Educational Resources Information Center
Warren, Rebecca Anne
2011-01-01
The purpose of this quantitative comparative study was to evaluate the Positive Alternative Credit Experience (PACE) Program using an objectives-oriented approach to a formative program evaluation. The PACE Program was a semester-long high school alternative education program designed to serve students at-risk for academic failure or dropping out…
Taylor, Sean C; Mrkusich, Eli M
2014-01-01
In the past decade, the techniques of quantitative PCR (qPCR) and reverse transcription (RT)-qPCR have become accessible to virtually all research labs, producing valuable data for peer-reviewed publications and supporting exciting research conclusions. However, the experimental design and validation processes applied to the associated projects are the result of historical biases adopted by individual labs that have evolved and changed since the inception of the techniques and associated technologies. This has resulted in wide variability in the quality, reproducibility and interpretability of published data as a direct result of how each lab has designed their RT-qPCR experiments. The 'minimum information for the publication of quantitative real-time PCR experiments' (MIQE) was published to provide the scientific community with a consistent workflow and key considerations to perform qPCR experiments. We use specific examples to highlight the serious negative ramifications for data quality when the MIQE guidelines are not applied and include a summary of good and poor practices for RT-qPCR. © 2013 S. Karger AG, Basel.
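As a concrete example of the kind of calculation whose validity depends on MIQE-style design (validated reference genes, efficiency checks, documented workflow), the sketch below implements the widely used 2^(-ΔΔCq) relative quantification method; the guidelines themselves govern design and reporting rather than mandating this particular formula.

```python
# Livak 2^(-ΔΔCq) relative quantification with hypothetical Cq values.
def fold_change(cq_target_treated, cq_ref_treated, cq_target_control, cq_ref_control):
    d_cq_treated = cq_target_treated - cq_ref_treated    # normalize to reference gene
    d_cq_control = cq_target_control - cq_ref_control
    return 2.0 ** -(d_cq_treated - d_cq_control)         # assumes ~100% PCR efficiency

print(round(fold_change(22.1, 18.0, 24.3, 18.1), 2))  # ≈ 4.3-fold up-regulation
```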
NASA Astrophysics Data System (ADS)
Wang, Hui; Wang, Jian-Tao; Cao, Ze-Xian; Zhang, Wen-Jun; Lee, Chun-Sing; Lee, Shuit-Tong; Zhang, Xiao-Hong
2015-03-01
While the vapour-liquid-solid process has been widely used for growing one-dimensional nanostructures, quantitative understanding of the process is still far from adequate. For example, the origins for the growth of periodic one-dimensional nanostructures are not fully understood. Here we observe that morphologies in a wide range of periodic one-dimensional nanostructures can be described by two quantitative relationships: first, inverse of the periodic spacing along the length direction follows an arithmetic sequence; second, the periodic spacing in the growth direction varies linearly with the diameter of the nanostructure. We further find that these geometric relationships can be explained by considering the surface curvature oscillation of the liquid sphere at the tip of the growing nanostructure. The work reveals the requirements of vapour-liquid-solid growth. It can be applied for quantitative understanding of vapour-liquid-solid growth and to design experiments for controlled growth of nanostructures with custom-designed morphologies.
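The first reported relationship is easy to state numerically: if successive periodic spacings d_k along the length direction obey the rule, then 1/d_k increases by a constant step. The sketch below checks this for hypothetical spacing values.

```python
# Check that inverse periodic spacings form an arithmetic sequence (hypothetical data).
spacings = [100.0, 50.0, 33.3, 25.0]  # nm, successive periods along the length
inverse = [1.0 / d for d in spacings]
steps = [round(b - a, 4) for a, b in zip(inverse, inverse[1:])]
print(steps)  # near-constant increments indicate an arithmetic sequence
```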
Lemurs and macaques show similar numerical sensitivity.
Jones, Sarah M; Pearson, John; DeWind, Nicholas K; Paulsen, David; Tenekedjieva, Ana-Maria; Brannon, Elizabeth M
2014-05-01
We investigated the precision of the approximate number system (ANS) in three lemur species (Lemur catta, Eulemur mongoz, and Eulemur macaco flavifrons), one Old World monkey species (Macaca mulatta) and humans (Homo sapiens). In Experiment 1, four individuals of each nonhuman primate species were trained to select the numerically larger of two visual arrays on a touchscreen. We estimated numerical acuity by modeling Weber fractions (w) and found quantitatively equivalent performance among all four nonhuman primate species. In Experiment 2, we tested adult humans in a similar procedure, and they outperformed the four nonhuman species but showed qualitatively similar performance. These results indicate that the ANS is conserved over the primate order.
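A standard way to model such numerosity comparisons (assumed here; the paper's exact parameterization may differ) treats accuracy as a Gaussian function of the numerical distance scaled by the Weber fraction w, so a smaller w means sharper acuity.

```python
# Standard ANS psychophysical model: accuracy as a function of Weber fraction w.
from math import sqrt
from scipy.stats import norm

def p_correct(n1, n2, w):
    """Probability of correctly picking the larger of numerosities n1 > n2."""
    return norm.cdf((n1 - n2) / (w * sqrt(n1**2 + n2**2)))

for w in (0.2, 0.4):  # sharper vs. coarser numerical acuity
    print(w, round(p_correct(12, 8, w), 3))
```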
Water-water correlations in electrolyte solutions probed by hyper-Rayleigh scattering
NASA Astrophysics Data System (ADS)
Shelton, David P.
2017-12-01
Long-range ion-induced correlations between water molecules have been observed by second-harmonic or hyper-Rayleigh scattering experiments with conflicting results. The most recent work observed a large difference between the results for H2O and D2O, and large discrepancies with the previously proposed theory. However, the present observations are in quantitative agreement with the model where the ion electric field induces second harmonic generation by the water molecules, and ion-ion correlations given by the Debye-Hückel theory account for intensity saturation at high ion concentration. This work compares experimental results with theory and addresses the apparent discrepancies with previous experiments.
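The Debye-Hückel ingredient of the model is the familiar screened ion-ion correlation, controlled by the inverse Debye length; the standard definition below (SI form, with ion number densities n_i, charges q_i, and permittivity ε) is quoted for reference rather than taken from the paper.

```latex
\kappa^{2} \;=\; \frac{1}{\lambda_{D}^{2}} \;=\; \frac{\sum_{i} n_{i}\, q_{i}^{2}}{\varepsilon\, k_{B} T}
```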
Prepared stimuli enhance aversive learning without weakening the impact of verbal instructions
2018-01-01
Fear-relevant stimuli such as snakes and spiders are thought to capture attention due to evolutionary significance. Classical conditioning experiments indicate that these stimuli accelerate learning, while instructed extinction experiments suggest they may be less responsive to instructions. We manipulated stimulus type during instructed aversive reversal learning and used quantitative modeling to simultaneously test both hypotheses. Skin conductance reversed immediately upon instruction in both groups. However, fear-relevant stimuli enhanced dynamic learning, as measured by higher learning rates in participants conditioned with images of snakes and spiders. Results are consistent with findings that dissociable neural pathways underlie feedback-driven and instructed aversive learning. PMID:29339561
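The "learning rates" referred to here are the kind of parameter estimated by error-driven learning models; a minimal Rescorla-Wagner-style sketch is shown below, in which a higher learning rate produces faster adjustment after a reversal. Whether the study used this exact update rule is an assumption.

```python
# Minimal error-driven (Rescorla-Wagner-style) learning with a reversal.
def update(value, outcome, alpha):
    return value + alpha * (outcome - value)  # prediction-error update

for alpha in (0.1, 0.5):  # slow vs. fast learner
    v = 0.0
    for outcome in [1, 1, 1, 1, 0, 0, 0, 0]:  # contingency reverses midway
        v = update(v, outcome, alpha)
    print(alpha, round(v, 3))
```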
Interaction Metrics for Feedback Control of Sound Radiation from Stiffened Panels
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.; Cox, David E.; Gibbs, Gary P.
2003-01-01
Interaction metrics developed for the process control industry are used to evaluate decentralized control of sound radiation from bays on an aircraft fuselage. The metrics are applied to experimentally measured frequency response data from a model of an aircraft fuselage. The purpose is to understand how coupling between multiple bays of the fuselage can destabilize or limit the performance of a decentralized active noise control system. The metrics quantitatively verify observations from a previous experiment, in which decentralized controllers performed worse than centralized controllers. The metrics do not appear to be useful for explaining the control spillover observed in that experiment.
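One classic process-control interaction metric is the relative gain array (RGA), computed from a plant gain matrix G as the elementwise product of G with the transpose of its inverse; whether this specific metric is the one used in the paper is an assumption, and the 2x2 gain matrix below is hypothetical.

```python
# Relative gain array for a hypothetical 2x2 plant gain matrix.
import numpy as np

G = np.array([[1.0, 0.4],
              [0.3, 1.2]])
rga = G * np.linalg.inv(G).T  # elementwise product with inverse-transpose
print(rga)  # rows and columns sum to 1; large off-diagonals signal strong coupling
```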
Burger, Stefan; Fraunholz, Thomas; Leirer, Christian; Hoppe, Ronald H W; Wixforth, Achim; Peter, Malte A; Franke, Thomas
2013-06-25
Phase decomposition in lipid membranes has been the subject of numerous investigations by both experiment and theoretical simulation, yet quantitative comparisons of the simulated data to the experimental results are rare. In this work, we present a novel way of comparing the temporal development of liquid-ordered domains obtained from numerically solving the Cahn-Hilliard equation and by inducing a phase transition in giant unilamellar vesicles (GUVs). Quantitative comparison is done by calculating the structure factor of the domain pattern. It turns out that the decomposition takes place in three distinct regimes in both experiment and simulation. These regimes are characterized by different rates of growth of the mean domain diameter, and there is quantitative agreement between experiment and simulation as to the duration of each regime and the absolute rate of growth in each regime.
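For reference, the Cahn-Hilliard equation solved in the simulations is conventionally written as below, with composition c, mobility M, free-energy density f, and gradient-energy coefficient κ; the notation is the textbook form, not necessarily the paper's.

```latex
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \left[ M \,\nabla\!\left( \frac{\partial f}{\partial c} \;-\; \kappa\, \nabla^{2} c \right) \right]
```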
A test of the double-shearing model of flow for granular materials
Savage, J.C.; Lockner, D.A.
1997-01-01
The double-shearing model of flow attributes plastic deformation in granular materials to cooperative slip on conjugate Coulomb shears (surfaces upon which the Coulomb yield condition is satisfied). The strict formulation of the double-shearing model then requires that the slip lines in the material coincide with the Coulomb shears. Three different experiments that approximate simple shear deformation in granular media appear to be inconsistent with this strict formulation. For example, the orientation of the principal stress axes in a layer of sand driven in steady, simple shear was measured subject to the assumption that the Coulomb failure criterion was satisfied on some surfaces (orientation unspecified) within the sand layer. The orientation of the inferred principal compressive axis was then compared with the orientations predicted by the double-shearing model. The strict formulation of the model [Spencer, 1982] predicts that the principal stress axes should rotate in a sense opposite to that inferred from the experiments. A less restrictive formulation of the double-shearing model by de Josselin de Jong [1971] does not completely specify the solution but does prescribe limits on the possible orientations of the principal stress axes. The orientations of the principal compression axis inferred from the experiments are probably within those limits. An elastoplastic formulation of the double-shearing model [de Josselin de Jong, 1988] is reasonably consistent with the experiments, although quantitative agreement was not attained. Thus we conclude that the double-shearing model may be a viable law to describe deformation of granular materials, but the macroscopic slip surfaces will not in general coincide with the Coulomb shears.
Kim, Seongho; Carruthers, Nicholas; Lee, Joohyoung; Chinni, Sreenivasa; Stemmer, Paul
2016-12-01
Stable isotope labeling by amino acids in cell culture (SILAC) is a practical and powerful approach for quantitative proteomic analysis. A key advantage of SILAC is the ability to simultaneously detect the isotopically labeled peptides in a single instrument run, guaranteeing relative quantitation for a large number of peptides without introducing any variation caused by separate experiments. However, few approaches are available for assessing protein ratios, and none of the existing algorithms pays particular attention to proteins having only one peptide hit. We introduce new quantitative approaches to dealing with SILAC protein-level summaries using classification-based methodologies, such as Gaussian mixture models with EM algorithms and their Bayesian counterparts, as well as K-means clustering. In addition, a new approach is developed using a Gaussian mixture model and a stochastic, metaheuristic global optimization algorithm, particle swarm optimization (PSO), to avoid either premature convergence or being stuck in a local optimum. Our simulation studies show that the newly developed PSO-based method performs best in terms of F1 score, and the proposed methods further demonstrate the ability to detect potential markers through real SILAC experimental data. The developed approach is applicable no matter how many peptide hits a protein has, rescuing many proteins that would otherwise be doomed to removal. Furthermore, no additional correction for multiple comparisons is necessary for the developed methods, enabling direct interpretation of the analysis outcomes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
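A minimal version of the classification idea can be sketched with an off-the-shelf Gaussian mixture: fit two components to simulated log2 protein ratios and read off which component each protein most likely belongs to. The PSO-assisted fitting described in the abstract is not reproduced; the data and settings below are illustrative.

```python
# Two-component Gaussian mixture on simulated log2 SILAC ratios.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
log_ratios = np.concatenate([rng.normal(0.0, 0.3, 900),    # unchanged proteins
                             rng.normal(2.0, 0.4, 100)])   # up-regulated proteins
gm = GaussianMixture(n_components=2, random_state=0).fit(log_ratios.reshape(-1, 1))
print(gm.means_.ravel())           # component means near 0 and 2
print(gm.predict([[0.1], [1.9]]))  # component assignment per protein
```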
Lake Representations in Global Climate Models: An End-User Perspective
NASA Astrophysics Data System (ADS)
Rood, R. B.; Briley, L.; Steiner, A.; Wells, K.
2017-12-01
The weather and climate in the Great Lakes region of the United States and Canada are strongly influenced by the lakes. Within global climate models, lakes are incorporated in many ways. End-users of climate model simulation data who want quantitative climate information for the Great Lakes, whether scientists or practitioners, therefore need to know if and how lakes are incorporated into the models. We pose the basic question: how are lakes represented in CMIP models? Despite significant efforts by the climate community to document and publish basic information about climate models, it is unclear how to answer this question. For someone with significant knowledge of the practice of the field, a reasonable starting point is the ES-DOC Comparator (https://compare.es-doc.org/ ). Once at this interface to model information, the end-user is faced with the need for more knowledge about the practice and culture of the discipline. For example, lakes are often categorized as a type of land, a counterintuitive concept. In some models, though, lakes are specified in ocean models. There is little evidence and little confidence that the information obtained through this process is complete or accurate; in fact, it is verifiably not accurate. This experience then motivates identifying and finding either human experts or technical documentation for each model. The conclusion from this exercise is that it can take months or longer to provide a defensible answer to if and how lakes are represented in climate models. Our experience suggests that this difficulty is not unique to lakes. This talk documents our experience and explores the barriers we have identified and strategies for reducing those barriers.
Giraud, Nicolas; Blackledge, Martin; Goldman, Maurice; Böckmann, Anja; Lesage, Anne; Penin, François; Emsley, Lyndon
2005-12-28
A detailed analysis of nitrogen-15 longitudinal relaxation times in microcrystalline proteins is presented. A theoretical model to quantitatively interpret relaxation times is developed in terms of motional amplitude and characteristic time scale. Different averaging schemes are examined in order to propose an analysis of relaxation curves that takes into account the specificity of MAS experiments. In particular, it is shown that magic angle spinning averages the relaxation rate experienced by a single spin over one rotor period, resulting in individual relaxation curves that are dependent on the orientation of their corresponding carousel with respect to the rotor axis. Powder averaging thus leads to a nonexponential behavior in the observed decay curves. We extract dynamic information from experimental decay curves, using a diffusion in a cone model. We apply this study to the analysis of spin-lattice relaxation rates of the microcrystalline protein Crh at two different fields and determine differential dynamic parameters for several residues in the protein.
Boydell, K M; Everett, B
1992-01-01
Supported housing (as distinct from supportive housing) emphasizes the values of consumer choice; independence; participation; permanence; normalcy; and flexible, ongoing supports. As a model, it has only recently become popular in the literature and therefore little is known of its effectiveness in serving people with long-term psychiatric backgrounds. In 1989, Homeward Projects, a community mental health agency located in Metropolitan Toronto, established a supported housing project. Homeward included an evaluative component in its program from the outset. In order to give equal weight to the tenants' opinions, both quantitative and qualitative methodologies were employed. In the quantitative component, residential milieu, social support, and service delivery were examined. The qualitative component involved an ethnographic study which allowed the tenants to voice their experiences of living in such a setting. Results provided a rich understanding of the model. Overall, the tenants eventually came to describe their house as a home.
NASA Technical Reports Server (NTRS)
Kaufman, Yoram
1999-01-01
Langley's remarkable solar and lunar spectra collected from Mt. Whitney inspired Arrhenius to develop the first quantitative climate model in 1896. In 1999, NASA's Earth Observing AM Satellite (EOS-Terra) will repeat Langley's experiment, but for the entire planet, thus pioneering a wide array of calibrated spectral observations from space of the Earth System. Conceived in response to real environmental problems, EOS-Terra, in conjunction with other international satellite efforts, will fill a major gap in current efforts by providing quantitative global data sets with a resolution of a few kilometers on the physical, chemical and biological elements of the earth system. Thus, like Langley's data, EOS-Terra can revolutionize climate research by inspiring a new generation of climate system models and enable us to assess the human impact on the environment. In the talk I shall review the historical developments that led to the Terra mission, its objectives, and an example of its application to biomass burning.
Liu, Jian-ping
2011-05-01
The core of evidence-based medicine lies in implementing the current best available evidence from clinical research to direct decision making in clinical practice, incorporating individual experience and the values and preferences of patients. However, the current methods for evaluating clinical therapeutic effects, based on randomized controlled trials, cannot reflect the humanistic, holistic, and individually tailored treatment of Chinese medicine (CM). This essay addresses the complex intervention of highly individualized CM treatment and its societal characteristics, and the author proposes a model for the evaluation of therapeutic effects of CM in which quantitative and qualitative methods are combined, embodying the characteristics of the social and natural sciences in CM. The model can show the dynamic process of CM diagnosis and treatment from the perspective of the whole system and can be used for the evaluation of complex CM interventions. We hope to stimulate thinking and methods for therapeutic effect evaluation that differ from those used in new drug development.
Stratification Modelling of Key Bacterial Taxa Driven by Metabolic Dynamics in Meromictic Lakes.
Zhu, Kaicheng; Lauro, Federico M; Su, Haibin
2018-06-22
In meromictic lakes, the water column is stratified into distinguishable steady layers with different physico-chemical properties. The bottom portion, known as the monimolimnion, has been studied for the functional stratification of microbial populations. Recent experiments have reported profiles of bacterial and nutrient spatial distributions, but a quantitative understanding is needed to unravel the underlying mechanism that maintains the discrete spatial organization. Here a reaction-diffusion model is developed that couples the spatial pattern to the light-driven metabolism of the bacteria and is resilient to a wide range of dynamical correlations between bacterial and nutrient species at the molecular level. In particular, exact analytical solutions of the system are presented together with numerical results, in good agreement with measurements in Ace Lake and Rogoznica Lake. Furthermore, a quantitative prediction is reported here on the dynamics of the seasonal stratification patterns in Ace Lake. The active role played by bacterial metabolism at the microscale clearly shapes the biogeochemical landscape of lake-wide ecology at the macroscale.
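A generic one-dimensional form of such a model (written here as an assumption, since the paper's full coupled system is not reproduced) tracks bacterial density B(z, t) over depth z with diffusion, a light- and nutrient-dependent growth rate μ, and mortality m:

```latex
\frac{\partial B}{\partial t} \;=\; D_{B}\,\frac{\partial^{2} B}{\partial z^{2}}
\;+\; \mu\big(I(z),\, N(z,t)\big)\, B \;-\; m\, B
```

An analogous equation governs the nutrient field N; steady layered profiles arise where growth balances diffusion and loss.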
The image of mathematics held by Irish post-primary students
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-08-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for 'image of mathematics' was adopted, with the image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research focused on students studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. Students were aged between 15 and 18 years. A questionnaire was constructed with both quantitative and qualitative aspects. The questionnaire survey was completed by 356 post-primary students. Responses were analysed quantitatively using the Statistical Package for the Social Sciences (SPSS) and qualitatively using the constant comparative method of analysis and by reviewing individual responses. Findings provide an insight into Irish post-primary students' images of mathematics and offer a means for constructing a theoretical model of image of mathematics which could be beneficial for future research.
Atomic Scale Structure of (001) Hydrogen-Induced Platelets in Germanium
NASA Astrophysics Data System (ADS)
David, Marie-Laure; Pizzagalli, Laurent; Pailloux, Fréderic; Barbot, Jean François
2009-04-01
An accurate characterization of the structure of hydrogen-induced platelets is a prerequisite for investigating both hydrogen aggregation and formation of larger defects. On the basis of quantitative high resolution transmission electron microscopy experiments combined with extensive first principles calculations, we present a model for the atomic structure of (001) hydrogen-induced platelets in germanium. It involves broken Ge-Ge bonds in the [001] direction that are dihydride passivated, vacancies, and trapped H2 molecules, showing that the species involved in platelet formation depend on the habit plane. This model explains all previous experimental observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy
Presented is a model verification and validation effort using low-velocity impact (LVI) experiments on carbon fiber reinforced polymer laminates. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models. However, the primary quantitative metrics of validation are the force time history measured through the instrumented indenter and the initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, consist of all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail, and material properties are measured, estimated from micromechanics, or optimized through calibration. A thorough verification and calibration to the accompanying experiments are presented. Special emphasis is given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior is verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
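The ensemble step described above can be sketched in a few lines. The toy simulator, parameter distributions, and measured value below are placeholders standing in for the Sierra finite element runs, purely to show the shape of the workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_peak_force(E, strength):
    # Placeholder for an expensive finite-element LVI simulation:
    # returns a scalar response (peak indenter force).
    return 0.8 * np.sqrt(E) * strength

# Sample uncertain material parameters (distributions are assumptions)
E = rng.normal(70.0, 5.0, size=1000)        # modulus, GPa
strength = rng.normal(2.0, 0.2, size=1000)  # failure strength, GPa

ensemble = simulate_peak_force(E, strength)
measured = 13.4  # hypothetical experimental peak force, kN

# Where does the measurement fall within the predicted distribution?
lo, hi = np.percentile(ensemble, [2.5, 97.5])
print(f"95% prediction interval: [{lo:.1f}, {hi:.1f}], measured: {measured}")
print("consistent with model" if lo <= measured <= hi else "discrepancy")
```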
Accelerated failure time models provide a useful statistical framework for aging research
Swindell, William R.
2009-01-01
Survivorship experiments play a central role in aging research and are performed to evaluate whether interventions alter the rate of aging and increase lifespan. The accelerated failure time (AFT) model is seldom used to analyze survivorship data, but offers a potentially useful statistical approach that is based upon the survival curve rather than the hazard function. In this study, AFT models were used to analyze data from 16 survivorship experiments that evaluated the effects of one or more genetic manipulations on mouse lifespan. Most genetic manipulations were found to have a multiplicative effect on survivorship that is independent of age and well-characterized by the AFT model “deceleration factor”. AFT model deceleration factors also provided a more intuitive measure of treatment effect than the hazard ratio, and were robust to departures from modeling assumptions. Age-dependent treatment effects, when present, were investigated using quantile regression modeling. These results provide an informative and quantitative summary of survivorship data associated with currently known long-lived mouse models. In addition, from the standpoint of aging research, these statistical approaches have appealing properties and provide valuable tools for the analysis of survivorship data. PMID:19007875
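For readers who want to apply this approach, the lifelines Python package provides an off-the-shelf Weibull AFT fitter; the data frame below is hypothetical, and exp(coef) plays the role of the deceleration factor discussed above:

```python
import pandas as pd
from lifelines import WeibullAFTFitter

# Hypothetical survivorship data: lifespan in days, death indicator,
# and a genetic-manipulation group flag (0 = control, 1 = mutant).
df = pd.DataFrame({
    "lifespan": [620, 710, 805, 910, 980, 1040, 760, 870, 990, 1105],
    "observed": [1, 1, 1, 1, 1, 0, 1, 1, 1, 1],
    "mutant":   [0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="lifespan", event_col="observed")

# In an AFT model the coefficient acts multiplicatively on time:
# exp(coef) stretches (or compresses) the whole survival curve.
aft.print_summary()
```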
ERIC Educational Resources Information Center
Kabael, Tangul; Akin, Ayca
2018-01-01
The aim of this research is to examine prospective mathematics teachers' quantitative reasoning, their support for students' quantitative reasoning and the relationship between them, if any. The teaching experiment was used as the research method in this qualitatively designed study. The data of the study were collected through a series of…
A high-efficiency regime for gas-phase terahertz lasers.
Wang, Fan; Lee, Jeongwon; Phillips, Dane J; Holliday, Samuel G; Chua, Song-Liang; Bravo-Abad, Jorge; Joannopoulos, John D; Soljačić, Marin; Johnson, Steven G; Everitt, Henry O
2018-06-11
We present both an innovative theoretical model and an experimental validation of a molecular gas optically pumped far-infrared (OPFIR) laser at 0.25 THz that exhibits 10× greater efficiency (39% of the Manley-Rowe limit) and 1,000× smaller volume than comparable commercial lasers. Unlike previous OPFIR-laser models involving only a few energy levels, which failed even qualitatively to match experiments at high pressures, our ab initio theory matches experiments quantitatively within experimental uncertainties, with no free parameters, by accurately capturing the interplay of millions of degrees of freedom in the laser. We show that previous OPFIR lasers were inefficient simply because they were too large, and that high powers favor high pressures and small cavities. We believe that these results will revive interest in OPFIR lasers as a powerful and compact source of terahertz radiation.
Old and new news about single-photon sensitivity in human vision
NASA Astrophysics Data System (ADS)
Nelson, Philip
It is sometimes said that "our eyes can see single photons," when in fact the faintest flash of light that can reliably be reported by human subjects is closer to 100 photons. Nevertheless, there is a sense in which the familiar claim is true. Experiments conducted long after the seminal work of Hecht, Shlaer, and Pirenne in two distinct realms, those of human psychophysics and single-cell physiology, now allow a more precise conclusion to be drawn about our visual apparatus. Finding a single framework that accommodates both kinds of result is a nontrivial challenge, and one that sets severe quantitative constraints on any model of dim-light visual processing. I will present one such model and compare it to a recent experiment. Partially supported by the NSF under Grants EF-0928048 and DMR-0832802.
Modeling genome coverage in single-cell sequencing
Daley, Timothy; Smith, Andrew D.
2014-01-01
Motivation: Single-cell DNA sequencing is necessary for examining genetic variation at the cellular level, which remains hidden in bulk sequencing experiments. But because they begin with such small amounts of starting material, the amount of information obtained from a single-cell sequencing experiment is highly sensitive to the choice of protocol employed and to variability in library preparation. In particular, the fraction of the genome represented in single-cell sequencing libraries exhibits extreme variability due to quantitative biases in amplification and loss of genetic material. Results: We propose a method to predict the genome coverage of a deep sequencing experiment using information from an initial shallow sequencing experiment mapped to a reference genome. The observed coverage statistics are used in a non-parametric empirical Bayes Poisson model to estimate the gain in coverage from deeper sequencing. This approach allows researchers to know statistical features of deep sequencing experiments without actually sequencing deeply, providing a basis for optimizing and comparing single-cell sequencing protocols or screening libraries. Availability and implementation: The method is available as part of the preseq software package. Source code is available at http://smithlabresearch.org/preseq. Contact: andrewds@usc.edu Supplementary information: Supplementary material is available at Bioinformatics online. PMID:25107873
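preseq stabilizes the classical Good-Toulmin series with rational-function approximations; the unstabilized core idea can be sketched as follows (histogram values are hypothetical, and the plain alternating series is only usable up to roughly a doubling of depth):

```python
import numpy as np

def good_toulmin_gain(counts_hist, t):
    """Expected number of NEW genomic positions covered when sequencing
    effort grows from 1x to (1 + t)x, given the histogram n_j = number
    of positions observed exactly j times in the shallow experiment.
    The alternating series is stable only for t <= 1; preseq's
    rational-function approximation extrapolates further."""
    j = np.arange(1, len(counts_hist) + 1)
    n_j = np.asarray(counts_hist, dtype=float)
    return np.sum((-1.0) ** (j + 1) * t ** j * n_j)

# Hypothetical histogram: 5e6 positions seen once, 2e6 twice, ...
hist = [5e6, 2e6, 8e5, 3e5, 1e5]
print("expected newly covered bases at 2x depth:",
      good_toulmin_gain(hist, 1.0))
```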
Modeling Fan Effects on the Time Course of Associative Recognition
Schneider, Darryl W.; Anderson, John R.
2011-01-01
We investigated the time course of associative recognition using the response signal procedure, whereby a stimulus is presented and followed after a variable lag by a signal indicating that an immediate response is required. More specifically, we examined the effects of associative fan (the number of associations that an item has with other items in memory) on speed–accuracy tradeoff functions obtained in a previous response signal experiment involving briefly studied materials and in a new experiment involving well-learned materials. High fan lowered asymptotic accuracy or the rate of rise in accuracy across lags, or both. We developed an Adaptive Control of Thought–Rational (ACT-R) model for the response signal procedure to explain these effects. The model assumes that high fan results in weak associative activation that slows memory retrieval, thereby decreasing the probability that retrieval finishes in time and producing a speed–accuracy tradeoff function. The ACT-R model provided an excellent account of the data, yielding quantitative fits that were as good as those of the best descriptive model for response signal data. PMID:22197797
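A minimal sketch of the standard ACT-R activation arithmetic behind the fan effect, using illustrative constants rather than the authors' fitted response-signal parameters:

```python
import math

def activation(base, fans, S=1.5):
    # ACT-R: A = B + sum_j W_j * S_j, with associative strength
    # S_j = S - ln(fan_j); more associations -> weaker spread.
    W = 1.0 / len(fans)  # source activation split across cues
    return base + sum(W * (S - math.log(f)) for f in fans)

def retrieval_time(A, F=0.35):
    # Retrieval latency falls exponentially with activation, so weak
    # activation (high fan) slows retrieval -- the effect modeled above.
    return F * math.exp(-A)

for fan in (1, 2, 3):
    A = activation(base=0.0, fans=[fan, fan])  # two cues, equal fan
    print(f"fan {fan}: A = {A:.2f}, retrieval time = {retrieval_time(A):.3f} s")
```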
Relevance and limitations of crowding, fractal, and polymer models to describe nuclear architecture.
Huet, Sébastien; Lavelle, Christophe; Ranchon, Hubert; Carrivain, Pascal; Victor, Jean-Marc; Bancaud, Aurélien
2014-01-01
Chromosome architecture plays an essential role in all nuclear functions, and its physical description has attracted considerable interest over the last few years among the biophysics community. This research at the frontier of physics and biology has been stimulated by the demand for quantitative analysis of molecular biology experiments, which provide comprehensive data on chromosome folding, and of live-cell imaging experiments that enable researchers to visualize selected chromosome loci in living or fixed cells. In this review, our goal is to survey several non-mutually exclusive models that have emerged to describe the folding of DNA in the nucleus, the dynamics of proteins in the nucleoplasm, and the movements of chromosome loci. We focus on three classes of models, namely molecular crowding, fractal, and polymer models, draw comparisons, and discuss their merits and limitations in the context of chromosome structure and dynamics and nuclear protein navigation in the nucleoplasm. Finally, we identify future challenges on the roadmap to a unified model of the nuclear environment. © 2014 Elsevier Inc. All rights reserved.
A simultaneous multimodal imaging system for tissue functional parameters
NASA Astrophysics Data System (ADS)
Ren, Wenqi; Zhang, Zhiwu; Wu, Qiang; Zhang, Shiwu; Xu, Ronald
2014-02-01
Simultaneous and quantitative assessment of skin functional characteristics in different modalities will facilitate diagnosis and therapy in many clinical applications such as wound healing. However, many existing clinical practices and multimodal imaging systems are subjective and qualitative, collect multimodal data sequentially, and need co-registration between modalities. To overcome these limitations, we developed a multimodal imaging system for quantitative, non-invasive, and simultaneous imaging of cutaneous tissue oxygenation and blood perfusion parameters. The imaging system integrated multispectral and laser speckle imaging technologies into one experimental setup. A LabVIEW interface was developed for equipment control, synchronization, and image acquisition. Advanced algorithms based on wide-gap second derivative reflectometry and laser speckle contrast analysis (LASCA) were developed for accurate reconstruction of tissue oxygenation and blood perfusion, respectively. Quantitative calibration experiments and a new style of skin-simulating phantom were designed to verify the accuracy and reliability of the imaging system, and the experimental results were compared with a Moor tissue oxygenation and perfusion monitor. For in vivo testing, a post-occlusion reactive hyperemia (PORH) procedure in a human subject and an ongoing wound-healing monitoring experiment using dorsal skinfold chamber models were conducted to validate the usability of our system for dynamic detection of oxygenation and perfusion parameters. In this study, we have not only set up an advanced multimodal imaging system for cutaneous tissue oxygenation and perfusion parameters but also demonstrated its potential for wound healing assessment in clinical practice.
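The LASCA step reduces to a sliding-window contrast computation. A minimal sketch, assuming a uniform (boxcar) window; the window size and synthetic frame are arbitrary choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, win=7):
    # LASCA: K = sigma / mean over a sliding window; faster flow blurs
    # the speckle pattern and lowers K.
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img ** 2, win)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(var) / (mean + 1e-12)

# Hypothetical raw speckle frame (a real frame comes from the camera)
frame = np.random.default_rng(1).poisson(50, size=(256, 256))
print("median speckle contrast:", np.median(speckle_contrast(frame)))
```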
Fractional calculus phenomenology in two-dimensional plasma models
NASA Astrophysics Data System (ADS)
Gustafson, Kyle; Del Castillo Negrete, Diego; Dorland, Bill
2006-10-01
Transport processes in confined plasmas for fusion experiments, such as ITER, are not well understood at the basic level of fully nonlinear, three-dimensional kinetic physics. Turbulent transport is invoked to describe the observed levels in tokamaks, which are orders of magnitude greater than the theoretical predictions. Recent results show the ability of a non-diffusive transport model to describe numerical observations of turbulent transport. For example, resistive MHD modeling of tracer particle transport in pressure-gradient-driven turbulence for a three-dimensional plasma reveals that the superdiffusive radial transport in this system (⟨x²⟩ ~ t^α with α > 1) is described quantitatively by a fractional diffusion equation. Fractional calculus is a generalization involving integro-differential operators, which naturally describe non-local behavior. Our previous work showed the quantitative agreement of special fractional diffusion equation solutions with numerical tracer particle flows in time-dependent linearized dynamics of the Hasegawa-Mima equation (for poloidal transport in a two-dimensional cold-ion plasma). In pursuit of a fractional diffusion model for transport in a gyrokinetic plasma, we now present numerical results from tracer particle transport in the nonlinear Hasegawa-Mima equation and a planar gyrokinetic model. Finite Larmor radius effects will be discussed. D. del Castillo Negrete, et al., Phys. Rev. Lett. 94, 065003 (2005).
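The superdiffusive diagnosis quoted above rests on fitting the scaling exponent of tracer statistics. A sketch of that fit, on synthetic data built to mix ballistic and diffusive motion (not output from the plasma simulations):

```python
import numpy as np

def anomalous_exponent(x, dt=1.0, max_lag=100):
    """Estimate alpha in <x^2(tau)> ~ tau^alpha from tracer positions
    x[particle, time]; alpha > 1 flags superdiffusive transport."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((x[:, k:] - x[:, :-k]) ** 2) for k in lags])
    alpha, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return alpha

# Synthetic tracers mixing persistent (ballistic) and diffusive motion
rng = np.random.default_rng(2)
n, T = 500, 1000
v = rng.normal(0.0, 0.1, size=(n, 1))            # persistent velocities
x = v * np.arange(T) + rng.normal(size=(n, T)).cumsum(axis=1)
print("estimated alpha:", anomalous_exponent(x))  # expect 1 < alpha < 2
```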
Palacio-Torralba, Javier; Hammer, Steven; Good, Daniel W; Alan McNeill, S; Stewart, Grant D; Reuben, Robert L; Chen, Yuhang
2015-01-01
Although palpation has been successfully employed for centuries to assess soft tissue quality, it is a subjective test, and is therefore qualitative and dependent on the experience of the practitioner. Reproducing what the medical practitioner feels requires more than a simple quasi-static stiffness measurement. This paper assesses the capacity of dynamic mechanical palpation to measure the changes in viscoelastic properties that soft tissue can exhibit under certain pathological conditions. A diagnostic framework is proposed to measure elastic and viscous behaviors simultaneously using a reduced set of viscoelastic parameters, giving a reliable index for quantitative assessment of tissue quality. The approach is illustrated on prostate models reconstructed from prostate MRI scans. The examples show that the change in viscoelastic time constant between healthy and cancerous tissue is a key index for quantitative diagnostics using point probing. The method is not limited to any particular tissue or material and is therefore useful for tissue where defining a unique time constant is not trivial. The proposed framework of quantitative assessment could become a useful tool in clinical diagnostics for soft tissue. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
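Extracting a viscoelastic time constant from a probe measurement can be sketched as a relaxation-curve fit. The standard-linear-solid form and the data below are assumptions for illustration, not the paper's specific model:

```python
import numpy as np
from scipy.optimize import curve_fit

def sls_relaxation(t, E_inf, E_1, tau):
    # Standard linear solid: stress relaxation under a step strain;
    # tau is the time-constant index used for tissue classification.
    return E_inf + E_1 * np.exp(-t / tau)

# Hypothetical force-relaxation data from a dynamic palpation probe
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(3)
data = sls_relaxation(t, 1.0, 0.8, 2.0) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(sls_relaxation, t, data, p0=(0.5, 0.5, 1.0))
print(f"fitted time constant tau = {popt[2]:.2f} s")
```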
Matta, Chérif F; Arabi, Alya A
2011-06-01
The use of electron density-based molecular descriptors in drug research, particularly in quantitative structure–activity relationship/quantitative structure–property relationship studies, is reviewed. The exposition starts with a discussion of molecular similarity and transferability in terms of the underlying electron density, which leads to a qualitative introduction to the quantum theory of atoms in molecules (QTAIM). The starting point of QTAIM is the topological analysis of the molecular electron-density distributions to extract atomic and bond properties that characterize every atom and bond in the molecule. These atomic and bond properties have considerable potential as bases for the construction of robust quantitative structure–activity/property relationship models, as shown by selected examples in this review. QTAIM is applicable to the electron density calculated from quantum-chemical calculations and/or that obtained from ultra-high-resolution X-ray diffraction experiments followed by nonspherical refinement. Atomic and bond properties are introduced, followed by examples of application of each of these two families of descriptors. The review ends with a study whereby the molecular electrostatic potential, uniquely determined by the density, is used in conjunction with atomic properties to elucidate the reasons for the biological similarity of bioisosteres.
Quantitative Adverse Outcome Pathways and Their ...
A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course predictions that can support regulatory decision-making. Herein we describe several facets of qAOPs, including (a) motivation for development, (b) technical considerations, (c) evaluation of confidence, and (d) potential applications. The qAOP used as an illustrative example for these points describes the linkage between inhibition of cytochrome P450 19A aromatase (the MIE) and population-level decreases in the fathead minnow (FHM; Pimephales promelas). The qAOP consists of three linked computational models for the following: (a) the hypothalamic-pituitary-gonadal axis in female FHMs, where aromatase inhibition decreases the conversion of testosterone to 17β-estradiol (E2), thereby reducing E2-dependent vitellogenin (VTG; egg yolk protein precursor) synthesis, (b) VTG-dependent egg development and spawning (fecundity), and (c) fecundity-dependent population trajectory. While development of the example qAOP was based on experiments with FHMs exposed to the aromatase inhibitor fadrozole, we also show how a toxic equivalence (TEQ) calculation allows use of the qAOP to predict effects of another, untested aromatase inhibitor, iprodione. While qAOP development can be resource-intensive, the quan
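The TEQ step amounts to rescaling the untested chemical's concentration by its relative potency before feeding it to the fadrozole-calibrated models; a sketch with an invented potency value:

```python
def fadrozole_equivalent(conc, relative_potency):
    """Toxic-equivalence scaling: express an untested aromatase
    inhibitor as the fadrozole concentration producing equivalent
    aromatase inhibition. relative_potency is hypothetical here."""
    return conc * relative_potency

# e.g. assume iprodione is 1/300 as potent as fadrozole (made-up number)
conc_iprodione = 600.0  # ug/L, hypothetical exposure
teq = fadrozole_equivalent(conc_iprodione, 1.0 / 300.0)
print(f"fadrozole-equivalent concentration: {teq:.2f} ug/L")
# teq would then drive the fadrozole-calibrated qAOP models
```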
Geoscientific process monitoring with positron emission tomography (GeoPET)
NASA Astrophysics Data System (ADS)
Kulenkampff, Johannes; Gründig, Marion; Zakhnini, Abdelhamid; Lippmann-Pipke, Johanna
2016-08-01
Transport processes in geomaterials can be observed with input-output experiments, which yield no direct information on the impact of heterogeneities, or they can be assessed by model simulations based on structural imaging using µ-CT. Positron emission tomography (PET) provides an alternative experimental observation method which directly and quantitatively yields the spatio-temporal distribution of tracer concentration. Process observation with PET benefits from its extremely high sensitivity together with a resolution that is acceptable in relation to standard drill core sizes. We strongly recommend applying high-resolution PET scanners in order to achieve a resolution on the order of 1 mm. We discuss the particularities of PET applications in geoscientific experiments (GeoPET), which essentially are due to high material density. Although PET is rather insensitive to matrix effects, mass attenuation and Compton scattering have to be corrected thoroughly in order to derive quantitative values. Examples of process monitoring of advection and diffusion processes with GeoPET illustrate the procedure and the experimental conditions, as well as the benefits and limits of the method.
Xian, Yu; Wang, Meie; Chen, Weiping
2015-11-01
Soil enzyme activities are greatly influenced by soil properties and could be significant indicators of heavy metal toxicity in soil for bioavailability assessment. Two groups of experiments were conducted to determine the joint effects of heavy metals and soil properties on soil enzyme activities. Results showed that arylsulfatase was the most sensitive soil enzyme and could be used as an indicator to study the enzymatic toxicity of heavy metals under various soil properties. Soil organic matter (SOM) was the dominant factor affecting the activity of arylsulfatase in soil. A quantitative model was derived to predict the changes of arylsulfatase activity with SOM content. When the SOM content was less than the critical point A (1.05% in our study), the arylsulfatase activity dropped rapidly. When the SOM content was greater than the critical point A, the arylsulfatase activity gradually rose to higher levels, showing that the soil microbial activities were enhanced rather than harmed. The SOM content needs to be above the critical point B (2.42% in our study) to protect the soil microbial community from harm under severe Pb pollution (500 mg kg⁻¹ in our study). The quantitative model revealed the pattern of variation of enzymatic toxicity due to heavy metals under various SOM contents. The applicability of the model across a wider range of soil properties needs to be tested. The model nevertheless provides a methodological basis for ecological risk assessment of heavy metals in soil. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Mitchell-Koch, Jeremy T.; Reid, Kendra R.; Meyerhoff, Mark E.
2008-01-01
An experiment for the undergraduate quantitative analysis laboratory involving applications of visible spectrophotometry is described. Salicylate, a component found in several medications, as well as the active by-product of aspirin decomposition, is quantified. The addition of excess iron(III) to a solution of salicylate generates a deeply…
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment, and this background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few have applied them to qualitative and quantitative LIBS analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. In a background correction simulation, the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method; all of these methods acquire larger SBR values than before background correction (the SBR before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise of various signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient before background correction is 0.9776; after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method it is 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
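One way to realize the spline idea is to anchor a cubic spline on local minima of the spectrum and subtract it; the knot-selection rule here is our assumption, not necessarily the authors':

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_baseline(w, intensity, n_segments=20):
    # Anchor the baseline at the local minimum of each spectral segment,
    # interpolate with a cubic spline, and subtract the estimate.
    idx = np.array([seg[np.argmin(intensity[seg])] for seg in
                    np.array_split(np.arange(len(intensity)), n_segments)])
    baseline = CubicSpline(w[idx], intensity[idx])(w)
    return intensity - baseline, baseline

# Synthetic LIBS-like spectrum: one emission line on a smooth continuum
w = np.linspace(200, 800, 3000)
continuum = 500 * np.exp(-((w - 450) / 200) ** 2)
line = 800 * np.exp(-((w - 324.7) / 0.3) ** 2)  # near the Cu I 324.7 nm line
spectrum = continuum + line + np.random.default_rng(4).normal(0, 5, w.size)

corrected, baseline = spline_baseline(w, spectrum)
print("peak height after background subtraction:", corrected.max())
```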
Design and study of water supply system for supercritical unit boiler in thermal power station
NASA Astrophysics Data System (ADS)
Du, Zenghui
2018-04-01
In order to design and optimize the boiler feedwater system of a supercritical unit, establishing a highly accurate model of the controlled object and its dynamic characteristics is a prerequisite for developing a sound thermal control system. Mechanistic modeling often leads to large systematic errors. Drawing on the information contained in historical operating data from a typical boiler thermal system, a modern intelligent identification method is used here to establish a high-precision quantitative model. This approach avoids the difficulties caused by disturbance-experiment modeling on the actual system in the field, and provides a strong reference for the design and optimization of thermal automation control systems in thermal power plants.
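The abstract does not name the identification method; as one common data-driven choice, an ARX model fitted by least squares to historical operating data might look like this (signals and coefficients are synthetic):

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
    y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb],
    e.g. feedwater flow u driving drum level or steam temperature y."""
    k0 = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(k0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[k0:], rcond=None)
    return theta  # [a1..a_na, b1..b_nb]

# Hypothetical historical operating data from a "true" plant
rng = np.random.default_rng(5)
u = rng.normal(size=2000)
y = np.zeros(2000)
for k in range(2, 2000):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2] \
           + rng.normal(0, 0.01)

print("estimated parameters:", np.round(fit_arx(u, y), 3))
```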
Measurement with microscopic MRI and simulation of flow in different aneurysm models.
Edelhoff, Daniel; Walczak, Lars; Frank, Frauke; Heil, Marvin; Schmitz, Inge; Weichert, Frank; Suter, Dieter
2015-10-01
The impact and the development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension, and a better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively at high spatial resolution. Magnetic resonance flow imaging was used to visualize fluid exchange in two different models produced with a 3D printer; one aneurysm model was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time-of-flight magnetic resonance imaging. The whole experiment was simulated using fast graphics-processing-unit-based numerical simulations, and the simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin-lattice relaxation. The results of both methods compared well for the aneurysm models and flow distributions used; the fluid-exchange analysis showed comparable characteristics between measurement and simulation, and similar symmetry behavior was observed. Based on these results, the amount of fluid exchange was calculated: depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. The numerical simulations coincide well with the experimentally determined velocity field, and the rate of fluid exchange between vessel and aneurysm was well predicted. Hence, the results obtained by simulation could be validated by the experiment. The observed deviations can be caused by noise in the measurement and by the limited resolution of the simulation; the resulting differences are small enough to allow reliable predictions of the flow distribution in vessels with stents and for pulsed blood flow.
Interfacial self-healing of nanocomposite hydrogels: Theory and experiment
NASA Astrophysics Data System (ADS)
Wang, Qiming; Gao, Zheming; Yu, Kunhao
2017-12-01
Polymers with dynamic bonds are able to self-heal their fractured interfaces and restore their mechanical strength. It remains largely elusive how to analytically model this self-healing behavior so as to construct a mechanistic relationship between the self-healing properties (e.g., healed interfacial strength and equilibrium healing time) and the material composition and healing conditions. Here, we take a self-healable nanocomposite hydrogel as an example to illustrate an interfacial self-healing theory for hydrogels with dynamic bonds. In the theory, we consider free polymer chains that diffuse across the interface and re-form crosslinks to bridge the interface. We analytically show that the healed strength of nanocomposite hydrogels increases with healing time in an error-function-like form. The equilibrium self-healing time for full-strength recovery decreases with temperature and increases with nanoparticle concentration. We further show analytically that the healed interfacial strength decreases with increasing delay time before the healing process. The theoretical results quantitatively match our experiments on nanosilica hydrogels, and also agree well with other researchers' experiments on nanoclay hydrogels. We expect that this theory will open promising avenues for quantitative understanding of the self-healing mechanics of various polymers with dynamic bonds, and offer insights for designing high-performance self-healing polymers.
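The reported error-function-like recovery suggests a two-parameter fit for extracting the healing time scale from data; the exact functional form below is our reading of "error-function-like", and the data points are invented:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def healed_strength(t, sigma_full, tau):
    # Error-function-like recovery of interfacial strength with healing
    # time t; tau sets the equilibrium self-healing time scale.
    return sigma_full * erf(np.sqrt(t / tau))

# Hypothetical healing data: time (h) vs recovered strength (kPa)
t = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 24])
sigma = np.array([14, 21, 30, 41, 52, 58, 61, 62])

(sig_f, tau), _ = curve_fit(healed_strength, t, sigma,
                            p0=(60.0, 4.0), bounds=(0, np.inf))
print(f"full strength ~ {sig_f:.0f} kPa, healing time scale ~ {tau:.1f} h")
```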
Edinger, Magnus; Knopp, Matthias Manne; Kerdoncuff, Hugo; Rantanen, Jukka; Rades, Thomas; Löbmann, Korbinian
2018-05-30
In this study, the influence of drug load on the microwave-induced amorphization of celecoxib (CCX) in polyvinylpyrrolidone (PVP) tablets was investigated using quantitative transmission Raman spectroscopy. A design of experiments (DoE) setup was applied for developing the quantitative model using two factors: drug load (10, 30, and 50% w/w) and amorphous fraction (0, 25, 50, 75 and 100%). The data were modeled using partial least-squares (PLS) regression, resulting in a robust model with a root-mean-square error of prediction of 2.5%. The PLS model was used to study the amorphization kinetics of CCX-PVP tablets with different drug contents (10, 20, 30, 40 and 50% w/w). For this purpose, transmission Raman spectra were collected at 60 s intervals over a total microwaving time of 10 min with an energy input of 1000 W. Using the quantitative model it was possible to measure the amorphous fraction of the tablets and follow the amorphization as a function of microwaving time. The relative amorphous fraction of CCX increased with increasing microwaving time and decreasing drug load: 90 ± 7% of the drug was amorphized in the tablets with 10% drug load, whereas only 31 ± 7% was amorphized in the 50% CCX tablets. This suggests that the degree of amorphization depends on drug load: direct contact between drug particles and the polymer PVP is a requirement for dissolution of the drug into the polymer upon microwaving, and the likelihood of such contact is reduced with increasing drug load. This was further supported by polarized light microscopy, which revealed crystalline particles and clusters in all the microwaved tablets. Copyright © 2018 Elsevier B.V. All rights reserved.
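A sketch of the calibration step with scikit-learn's PLS implementation; the spectra and reference values below are random placeholders for the DoE calibration set:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical: X holds preprocessed transmission Raman spectra (one row
# per tablet), y the known amorphous fraction of the calibration tablets.
rng = np.random.default_rng(6)
n_tablets, n_wavenumbers = 45, 1200
X = rng.normal(size=(n_tablets, n_wavenumbers))  # placeholder spectra
y = rng.uniform(0, 100, n_tablets)               # % amorphous (placeholder)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
rmsep = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"cross-validated RMSEP: {rmsep:.1f}% amorphous")
```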
Hamilton, Kerry A; Weir, Mark H; Haas, Charles N
2017-02-01
Mycobacterium avium complex (MAC) is a group of environmentally transmitted pathogens of great public health importance. This group is known to be harbored, amplified, and selected for more human-virulent characteristics by amoeba species in aquatic biofilms. However, a quantitative microbial risk assessment (QMRA) has not been performed, owing to the lack of dose-response models resulting from significant heterogeneity within even a single species or subspecies of MAC, as well as the range of human susceptibilities to mycobacterial disease. The primary human-relevant species and subspecies responsible for the majority of the human disease burden and present in drinking water, biofilms, and soil are M. avium subsp. hominissuis, M. intracellulare, and M. chimaera. A critical review of the published literature identified important health endpoints, exposure routes, and susceptible populations for MAC risk assessment. In addition, data sets for quantitative dose-response functions were extracted from published in vivo animal dosing experiments. As a result, seven new exponential dose-response models for human-relevant species of MAC with endpoints of lung lesions, death, disseminated infection, liver infection, and lymph node lesions are proposed. Although current physical and biochemical tests used in clinical settings do not differentiate between M. avium and M. intracellulare, differentiating between environmental species and subspecies of the MAC can aid in the assessment of health risks and control of MAC sources. A framework is proposed for incorporating the proposed dose-response models into susceptible-population- and exposure-route-specific QMRA models. Copyright © 2016 Elsevier Ltd. All rights reserved.
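An exponential dose-response model has a single parameter and can be fitted by maximum likelihood from grouped dosing data; the counts below are hypothetical, not the extracted data sets:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_exponential_dr(doses, n_exposed, n_responding):
    """MLE of k in the exponential dose-response model
    P(response | dose d) = 1 - exp(-k * d), from grouped animal
    dosing data (all numbers below are hypothetical)."""
    def neg_log_lik(log_k):
        p = 1.0 - np.exp(-np.exp(log_k) * doses)
        p = np.clip(p, 1e-12, 1 - 1e-12)  # guard the log terms
        return -np.sum(n_responding * np.log(p)
                       + (n_exposed - n_responding) * np.log(1 - p))
    res = minimize_scalar(neg_log_lik, bounds=(-30, 5), method="bounded")
    return np.exp(res.x)

doses = np.array([1e2, 1e4, 1e6, 1e8])       # CFU, hypothetical
n_exposed = np.array([10, 10, 10, 10])
n_responding = np.array([0, 2, 6, 10])
k = fit_exponential_dr(doses, n_exposed, n_responding)
print(f"k = {k:.3g}; ID50 = {np.log(2) / k:.3g} CFU")
```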
A Computational Model of Liver Iron Metabolism
Mitchell, Simon; Mendes, Pedro
2013-01-01
Iron is essential for all known life due to its redox properties; however, these same properties can also lead to its toxicity in overload through the production of reactive oxygen species. Robust systemic and cellular control are required to maintain safe levels of iron, and the liver seems to be where this regulation is mainly located. Iron misregulation is implicated in many diseases, and as our understanding of iron metabolism improves, the list of iron-related disorders grows. Recent developments have resulted in greater knowledge of the fate of iron in the body and have led to a detailed map of its metabolism; however, a quantitative understanding at the systems level of how its components interact to produce tight regulation remains elusive. A mechanistic computational model of human liver iron metabolism, which includes the core regulatory components, is presented here. It was constructed based on known mechanisms of regulation and on their kinetic properties, obtained from several publications. The model was then quantitatively validated by comparing its results with previously published physiological data, and it is able to reproduce multiple experimental findings. A time course simulation following an oral dose of iron was compared to a clinical time course study and the simulation was found to recreate the dynamics and time scale of the systems response to iron challenge. A disease state simulation of haemochromatosis was created by altering a single reaction parameter that mimics a human haemochromatosis gene (HFE) mutation. The simulation provides a quantitative understanding of the liver iron overload that arises in this disease. This model supports and supplements understanding of the role of the liver as an iron sensor and provides a framework for further modelling, including simulations to identify valuable drug targets and design of experiments to improve further our knowledge of this system. PMID:24244122
Chemical Bonding in Sulfide Minerals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaughan, David J.; Rosso, Kevin M.
An understanding of chemical bonding and electronic structure in sulfide minerals is central to any attempt at understanding their crystal structures, stabilities and physical properties. It is also an essential precursor to understanding reactivity through modeling surface structure at the molecular scale. In recent decades, there have been remarkable advances in first principles (ab initio) methods for the quantitative calculation of electronic structure. These advances have been made possible by the very rapid development of high performance computers. Several review volumes that chart the applications of these developments in mineralogy and geochemistry are available (Tossell and Vaughan, 1992; Cygan and Kubicki, 2001). An important feature of the sulfide minerals is the diversity of their electronic structures, as evidenced by their electrical and magnetic properties (see Pearce et al. 2006, this volume). Thus, sulfide minerals range from insulators through semiconductors to metals, and exhibit every type of magnetic behavior. This has presented problems for those attempting to develop bonding models for sulfides, and also led to certain misconceptions regarding the kinds of models that may be appropriate. In this chapter, chemical bonding and electronic structure models for sulfides are reviewed with emphasis on more recent developments. Although the fully ab initio quantitative methods are now capable of a remarkable degree of sophistication in terms of agreement with experiment and potential to interpret and predict behavior under varying conditions, both qualitative and more simplistic quantitative approaches will also be briefly discussed. This is because we believe that the insights which they provide are still helpful to those studying sulfide minerals. In addition to the application of electronic structure models and calculations to solid sulfides, work on sulfide mineral surfaces (Rosso and Vaughan 2006a,b) and on solution complexes and clusters (Rickard and Luther, 2006) is discussed in detail later in this volume.
A signal detection theory analysis of an unconscious perception effect.
Haase, S J; Theios, J; Jenison, R
1999-07-01
The independent observation model (Macmillan & Creelman, 1991) is fitted to detection-identification data collected under conditions of heavy masking. The model accurately predicts a quantitative relationship between stimulus detection and stimulus identification over a wide range of detection performance. This model can also be used to offer a signal detection interpretation of the common finding of above-chance identification following a missed signal. While our finding is not a new one, the stimuli used in this experiment (redundant three-letter strings) differ slightly from those used in traditional signal detection work. Also, the stimuli were presented very briefly and heavily masked, conditions typical in the study of unconscious perception effects.
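The quantitative machinery here is classical equal-variance signal detection theory. A sketch of the sensitivity index, and of why identification can remain above chance after a miss, under a toy Gaussian comparison rule (all numbers illustrative):

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    # Equal-variance SDT sensitivity index
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

hits, fas = 0.60, 0.10           # illustrative detection rates
d = d_prime(hits, fas)
c = norm.ppf(1 - fas)            # detection criterion implied by FA rate
print(f"detection d' = {d:.2f}")

# Above-chance identification after a miss: evidence on missed signal
# trials (x < c) still tends to exceed an independent noise draw.
x = np.random.default_rng(7).normal(d, 1.0, 100_000)  # signal-trial evidence
missed = x[x < c]
print(f"P(identify correctly | miss) ~ {norm.cdf(missed).mean():.2f} (> 0.5)")
```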
Modelling Influence and Opinion Evolution in Online Collective Behaviour
Gend, Pascal; Rentfrow, Peter J.; Hendrickx, Julien M.; Blondel, Vincent D.
2016-01-01
Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. This is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When data are scarce, the data from previous participants are used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed, based on a specific control experiment. More than two thirds of the prediction errors are found to be due to the unpredictability of the human judgment revision process rather than to model imperfection. PMID:27336834
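The influenceability parametrization amounts to a convex-combination update; a minimal sketch of using a population-level estimate to predict a new participant's revision (all values invented):

```python
def revise(opinion, external_judgment, influenceability):
    """One-parameter consensus-style revision: an individual moves a
    fraction alpha (their influenceability) toward the shown judgment."""
    return (1 - influenceability) * opinion + influenceability * external_judgment

# Hypothetical: predict a new participant's revision using the average
# influenceability estimated from previous participants' data.
alpha_prior = 0.3              # assumed population-level estimate
initial, shown = 120.0, 200.0  # e.g. a numeric estimation task
print("predicted revised answer:", revise(initial, shown, alpha_prior))
```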
Quinn, T. Alexander; Kohl, Peter
2013-01-01
Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypotheses-generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215